Learnlets

Clark Quinn’s Learnings about Learning

A good publisher

20 June 2018 by Clark Leave a Comment

I shot this short little video because, well, I have to say that my experience with ATD has been excellent. They’ve done what I’ve needed: listening when they should, arguing with me when they should, responding to my questions, and executing on their responsibilities professionally. They’ve gone above and beyond, and I’m pleased to have them as my publisher on my most recent tome.

If you’re going to complain about the bad things (or, at least, make fun of them :), I reckon you should highlight the good ones too.  They showed their capabilities while serving as co-publisher on my last book, and now they’ve demonstrated the whole deal. Thanks, team!

A solid education platform

19 June 2018 by Clark 4 Comments

In the past couple of days, I've come across two different initiatives to improve education. And certainly our education system can stand improvement. However, each one had the same major flaw, one that leaves open the chance that real improvement won't occur. Over a number of engagements I've developed the basis of what I think is a necessary foundation for a viable education platform. It's time to toss it out there and see what you all think.

So, one initiative proposed 10 different areas they wanted people to contribute in. These included AI, personalization, 'out of class' credit, and more. Which is all good, make no mistake! However, nowhere was there the option of 'a deeper pedagogy'. And that's a problem. It's all too easy to chase after the latest shiny object. It makes us feel like we're both doing something constructive and keeping up with developments. (Not to mention how much fun it is to play with the latest things.) However, gilding bad design is still bad design! We need to make sure the foundation is strong before we go further.

The other initiative has three ways to contribute: lifelong learning, a marketplace, and emerging technology. And, again, the big gap is talking about the pedagogy to begin with.  With a marketplace, you might get some Darwinian selection process, but why not put it out there from the get-go? Otherwise, it’s just cool tinkering around a broken core.

So here's where I pitch my three-part story. Note that curriculum is broken too (I'm channeling Roger Schank: 'Only two things wrong with education: what we teach and how we teach it'), and yet I'm not addressing that here. Well, only a second layer of curriculum (see below ;). I think the choice of the first-level curriculum is a big issue, but that changes depending on level, goals, etc. Here I'm talking about a platform for delivering the necessary elements of a supportable approach:

  1. The first element is a killer learning experience. What do I mean here? I mean an application-based learning approach. Even for so-called theory classes (e.g. typical higher ed), you do something with the material. And the experience is based upon minimal content, appropriate challenge, intrinsic motivation, and more. My claim: this is doable, even when you want to auto-mark as much as possible. Of course, there are still people in the loop.
  2. Which leads to the second element: we, as the provider, are a partner in your success. It's not 'sink or swim'; instead we're tracking your progress, intervening when it looks like you're struggling, and accessible at your time and place. We're also providing the necessary resources to succeed. And we're not interested in a curve; we're competency-based and want everyone to get where they can be. We're also making sure you're getting what you need.
  3. And that's the third element: we develop you. That is, we're not just developing your knowledge of the field, we're also developing key success skills. That means we're giving you chances to practice those skills, and tracking and developing them as well. This includes things like communication, collaboration, and design research: so-called 21C skills.
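The 'tracking your progress, intervening when it looks like you're struggling' piece of the second element could, in principle, start with very simple rules over practice data. A minimal sketch, purely hypothetical (the function name and thresholds are illustrative, not from any particular platform):

```python
# Hypothetical sketch: flag learners who may need an intervention,
# based on their most recent practice scores. Thresholds are illustrative.

def needs_intervention(attempts, window=3, mastery=0.6):
    """attempts: list of scores (0.0-1.0) on recent practice tasks."""
    if len(attempts) < window:
        return False  # not enough signal yet
    recent = attempts[-window:]
    # Intervene only if every recent attempt fell below mastery.
    return all(score < mastery for score in recent)

print(needs_intervention([0.9, 0.5, 0.4, 0.55]))  # three weak attempts in a row -> True
```

A real system would, of course, weigh richer signals (time on task, help requests), but even a rule this crude beats 'sink or swim'.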

I suggest that with such an approach, and the right curriculum, you’re providing a full suite of what education  should  be about. And, I suggest, we can do this now, affordably. Technology is part of the picture, learning science is part of the picture, and the commitment to do the right thing is part of the picture. Also, I think this is viable at all levels. K12, higher ed, and workplace.

And, I’ll suggest, anything less really isn’t defensible. We have the know-how, we have the tools, all we need is the will. Yet, despite some notable steps in the right direction, we’re really not there. It’s time to put a stake in the ground. Who’s up for it?

Quip: Learning & Development

5 June 2018 by Clark 4 Comments

I’ve used this quip quite a bit, as it’s essentially the rationale for the Revolution book.  And I want to make clear what I’m saying, and then qualify it.  It’s about the state of Learning & Development, and sums up one perspective fairly succinctly:

L&D isn't doing anywhere near what it could and should be, and what it is doing, it's doing badly. Other than that, it's fine.

It’s meant to be a little flip and ‘in your face’, but it’s because I think there’s such potential for L&D!  This is my way of characterizing the situation that might spark some reflection, and even action.

L&D is, largely, about courses. And unfortunately, too often they're about content dump, and an experience that will rank highly on a smile sheet. Which is historically understandable, but scientifically bereft. Compliance aside (and here's to a competency shift, away from '1 hour / year' or whatever other time-based measure we might find), our courses should be focused on applying knowledge to meaningful tasks, with meaningful feedback. Sufficient, varied, spaced, and deliberate practice! Of course this isn't everyone's L&D, but it certainly appears to be all too present.

The second thing is that L&D could be so much broader! If we're really worried about organizational performance and continuous improvement (which is why I suggest L&D should shift to P&D, performance and development), we should do more. Performance support, for instance, should be under the purview of L&D. Otherwise it gets left to chance, or to those who don't have the necessary background.

And, then there’s coaching. Recognize that learning takes time, and that we need to continue development beyond the classroom. Thus, coaching’s critical to continued improvement. Again, L&D has a role to play here: developing coaching skills, providing guidance, and tracking.

Then we go beyond formal learning: optimizing the ongoing learning in individuals, teams, and communities. This is organizational learning! There’re processes for individual improvement like PKM, team processes like brainstorming, and community interactions. Leaving these to chance is a mistake, as we can’t assume these skills.

And the outcomes of helping the organization get better beyond the course are big. Not just individual learning, but the organization is learning faster. And that’s a necessity for success, going forward. In short, there’s a lot L&D could be doing that would help the organization that it’s missing now.

Now, complaining, as my statement does, isn't useful unless it's constructive; and the point is that we know very comprehensive and specific things about doing better. By this quip I don't mean merely to criticize; I want to inspire action and improvement. So here's to revolutionizing L&D. I hope you'll join us!

Services

31 May 2018 by Clark Leave a Comment

From time to time, it’s worth a reminder that Quinnovation (the firm behind the blog) is available to help you.  Here are the services you can look to from me, in case you want to accelerate your success.

And a wee bit of self-promotion: if expertise comes from years of practice, how about 3+ decades of investigating the breadth and depth of learning & performance, and exploring technology support? Why not get assistance from where the thinking originates, rather than a diluted, several-steps-removed version?

Consulting:

Learning Design: are your design processes yielding the outcomes they should and need to? I have worked with many organizations to generate or tighten learning design processes to reflect learning science (not myths). I recognize that most organizations can’t completely revamp their approaches, so I look to the small changes with the biggest impact. A white paper talks about this.

Performance Ecosystem Strategy: are you leading your organization forward in learning (read: innovation) or are you still taking orders for courses?  Based on the book, I’ve helped a number of organizations understand the full spectrum of possibilities, evaluate their situation, and prioritize short-, medium-, and long-term steps.  Another white paper talks about this.

Games & Mobile: I’ve helped a number of organizations get their minds, strategies, and design processes around mobile and/or games, based upon  those  books.

Workshops

Want to get your team up to speed on learning science, strategy, games, mobile, or more? I have workshops on each that are interactive, engaging, and effective. Preferably, they're coupled with follow-up to extend the learning (applying the learning science), and that can be done in a variety of ways.

Presentations

A number of organizations around the world have booked me to speak to their audiences, whether on the subjects of my books or on the future of learning technology in general, and they've indicated they were quite satisfied with the result ;). If you want a credible, engaging presenter on intelligence augmentation, I'm a candidate.

Writing

In addition to books, I write white papers, blog posts, and articles for others. I could do the same for you.

Coaching

If you're a learning leader who would like assistance over time in addressing your organization's needs, it would certainly be worth a conversation. I haven't done this formally, but it seems like a natural extension.

And, of course, there are combinations of these services as well. You can find out more at the official Quinnovation site. Next week we return you to your regularly scheduled blog at this same channel.

Nuances Matter

30 May 2018 by Clark 1 Comment

I’ve argued before that the differences between well-designed and well-produced learning, and just well-produced learning, are subtle. And, in general, nuances matter. So, in my recent book, the section on misconceptions spent a lot of time unpacking some terms. The goal there was ensuring that the nuances were understood. And a recent event triggered even more reflection on this.

Learnnovators, a company I’ve done a couple of things with (the Deeper eLearning series, and the Workplace of the Future project), interviewed me once quite a while ago. I was impressed then with the depth of their background research and thoughtful questions. And they recently asked to interview me on the book. Of course, I agreed. And again they impressed me with the depths of their questions, and I realized in this case there was something specific going on.

In their questions, they were unpacking what the common concerns about some of the topics would be. The questions dug into ways in which people might think the recommendations are contrary to personal experience, and more. They were very specifically looking for the grounds on which folks might reject the findings. And that's important. I believe I had addressed most of them in the book, but it was worth revisiting them.

And that’s the thing that I think is important about this for our practice. We can’t just do the surface treatment. If we just say: “ok we need some content, and then let’s write a knowledge test on it”, we’ve let down our stakeholders.  If we don’t know the cognitive properties of the media we use, don’t sweat the details about feedback on assessment, don’t align the practice to the needed performance, etc., we’re not doing our job!

And I don't mean you have to get a Ph.D. in learning science, but you really do need to know what you're doing. Or, at least, have good checklists and quick reference guides to ensure you're on track. Ideally, you review your processes and tools for alignment with what's known. And the tools themselves could have support. (Okay, to a limit; I've seen this done to the extent that it puts handcuffs on design.)

Nuances matter,  if you care about the outcomes (and if you don’t, why bother? ;).  I’ve been working on both a checklist and on very specific changes that apply to various places in design processes that represent the major ways folks go wrong. These problems are relatively small, and easy to fix, and are designed to yield big improvements. But unless you know what they are, you’re unlikely to have the impact you intend.

Context is key

29 May 2018 by Clark 1 Comment

Workflow learning is one of the new buzzphrases. The notion is that you deliver learning to the point of need, instead of taking people away from the workflow. And I’m a fan. But it’s not as easy as it sounds!  Context is a critical issue in making this work, and that’s non-trivial.

When we create learning experiences, typically we do (or should) create an artificial context for learners to practice in. And this makes sense when the performance has high consequences.  However, if people are in the workflow, there is a context already. Could we leverage that context for learning instead of creating one?  When would this make sense?

I'd suggest that there are two times workflow learning makes sense. For one, if the performers aren't novices, it becomes an opportunity to provide learning at the point of need, to elaborate and extend learning. Say, refining knowledge of sales, marketing, or product when touching one of them. For another, it makes sense if the consequences aren't high and errors are easy to repair. So, sending on a workpiece that will get checked anyway.

Of course, we  could just do performance support, and not worry about the learning, but we can do that  and support learning as well. So, having an additional bit of learning content at the right time, whether alone or in conjunction with performance support, is a ‘good thing’.  The difficulties come when we get down to specifics.

Specifically,  how do we match the right content with the task? There are several ways. For one, it can just be pull. Here the individual asks for some additional help and/or learning. This isn’t completely trivial either, because you have to have a search mechanism that makes it easy for the performer to get the right stuff. This means federated search, vocabulary control, and more. Nothing you shouldn’t already be worrying about for pull learning anyways, but for the record.
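Pull depends on that search mechanism working across sources. A minimal sketch of federated search, with made-up stand-in sources and scores (nothing here reflects a real product):

```python
# Hypothetical federated search: query several content sources,
# merge the results, and rank them by a shared relevance score.

def search_all(query, sources):
    """sources: mapping of source name -> search function returning
    (title, score) pairs. Returns merged results, best first."""
    merged = []
    for name, search in sources.items():
        for title, score in search(query):
            merged.append((score, name, title))
    return sorted(merged, reverse=True)

# Stand-in sources for illustration only.
lms = lambda q: [("Sales onboarding course", 0.7)] if "sales" in q else []
wiki = lambda q: [("Sales playbook page", 0.9)] if "sales" in q else []

for score, source, title in search_all("sales", {"lms": lms, "wiki": wiki}):
    print(f"{score:.1f}  {source}: {title}")
```

The hard parts glossed over here are exactly the ones mentioned above: getting comparable relevance scores across systems, and a controlled vocabulary so the same query hits the right content everywhere.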

Second, you could do push. Here it gets more dicey. One way is to have content tied to specific instances. This can be done by hand, as some tools have made possible. That is, you instrument content with help where you find, or think, it could be needed. The other way is to be smart about the context.

And this is where it gets complicated. For such workflow learning to work, you really want to leverage the context, so you need to be able to identify the context. How do you know what they're doing? Then you need to map that context to content. You could use some signal (c.f. xAPI) that tells you when someone touches something. Then you could write rules that map that touch to the associated content. It might even be by description, not hardwired, so the system's flexible. For instance, it might change the content depending on how many times, and how recently, this person has done this task. This is all just good learning engineering, but the details matter.
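That rule mapping could look something like the following sketch: a signal reports what task someone touched, and a rule picks content based on their history with that task. All names and thresholds are hypothetical, for illustration only:

```python
# Hypothetical sketch: map a workflow 'touch' signal to learning content.
# Rules consult history (how often this person has done the task) so the
# pushed content adapts rather than being hardwired.

def pick_content(task, history):
    """task: what the performer touched; history: past touch counts by task."""
    count = history.get(task, 0)
    if task == "expense_report":
        # A novice gets a walkthrough; an experienced performer a deeper tip.
        return "walkthrough" if count < 3 else "advanced_tip"
    return None  # no content mapped to this task

history = {"expense_report": 5}
print(pick_content("expense_report", history))  # experienced -> advanced_tip
```

In practice the signal would arrive as an xAPI-style statement and the history would live in a record store, but the shape of the logic is the same.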

Making workflow learning work is a move towards a more powerful performance ecosystem and workforce, but it requires some backend effort.  Not surprising, but worth being clear on.

Real (e)Learning Heroes

24 April 2018 by Clark Leave a Comment

While there are people who claim to be leaders in elearning (and some are), there is another whole group who fly under the radar. These are the people who labor quietly in the background on initiatives that will benefit all of us. I'm thinking in particular of those who work to advance our standards. And they're real heroes for what they've done and are doing.

The initial learning technology standards came from the AICC. They wanted a way to share important learning around flight, an area with a big safety burden. Thus, they were able to come together despite competition.

Several other initiatives include the IEEE (which is pretty much the US-based body taking electrical and electronic technology standards to the international stage), and the IMS efforts from academia. Both were working on content/management interoperability when the US government put its foot down. The Department of Defense's ADL initiative decided upon a version, to move things forward, and thus was born SCORM.

Standards are good. When standards are well written, they provide a basis upon which innovation can flourish. Despite early hiccups, and some remaining issues, the ability of content written to the standards to run on any compliant platform, and vice versa, has enabled advancement. Well, except for those who were leveraging proprietary standards. As a better example, look how the WWW standard, on top of the internet standards, has enabled things like, well, this blog!

Ok, so it’s not all roses.  There are representatives who, despite good intentions, also have vested interests in things going in particular directions. Their motivations might be their employers, research, or other agendas.  But the process, the mechanisms that allow for decision making, usually end up working. And if not, there’s always the ADL to wield the ‘800 lb gorilla’ argument.

Other initiatives include xAPI, sponsored by ADL to address gaps in SCORM. This standard enables tracking and analytics beyond the course. It's no panacea, but it's a systematic way to accomplish a very useful goal. Ongoing is the ICICLE work on establishing principles for 'learning engineering', and IBSTPI's for training, performance, and instruction. Similarly, societies such as ATD and LPI try to create standards for necessary skills (their lists are appendices in the Revolution book).
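To make the 'beyond the course' tracking concrete: an xAPI statement records activity in a simple actor-verb-object form. A minimal example (the actor and activity identifiers below are made up; only the verb URI is a standard ADL one):

```python
import json

# Minimal xAPI-style statement: who did what to what.
# Actor and object ids here are illustrative, not real endpoints.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {"id": "http://example.com/activities/sales-coaching-session"},
}

print(json.dumps(statement, indent=2))
```

Because the object can be any identifiable activity, not just a course, statements like this can track coaching sessions, performance support use, or community contributions.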

And it’s hard work!  Trying to collect, synthesize, analyze, and fill in gaps to create a viable approach requires much effort both intellectual  and social!  These people labor away for long hours, on top of their other commitments in many cases.  And in many cases their organizations support their involvement, for good as well as selfish reasons such as being first to be able to leverage the outputs.

These people are working to our benefit. It’s worth being aware, recognizing, and appreciating the work they do.  I certainly think of them as heroes, and I hope you will do so as well.

New and improved evaluation

10 April 2018 by Clark 6 Comments

A few years ago, I had a ‘debate’ with Will Thalheimer about the Kirkpatrick model (you can read it here).  In short, he didn’t like it, and I did, for different reasons.  However, the situation has changed, and it’s worth revisiting the issue of evaluation.

In the debate, I was lauding how Kirkpatrick starts with the business problem and works backwards. Will attacked that the model didn't really evaluate learning. I replied that its role wasn't evaluating the effectiveness of the learning design on the learning outcome; it was assessing the impact of the learning outcome on the organizational outcome.

Fortunately, this discussion is now resolved. Will, to his credit, has released his own model (while unearthing the origins of Kirkpatrick’s work in Katzell’s).  His model is more robust, with 8 levels.  This isn’t overwhelming, as you can ignore some. Fortunately, there’re indicators as to what’s useful and what’s not!

It's not perfect. Kirkpatrick's (or Katzell's? :) can relatively easily be used for other interventions (incentives, job aids, ... though you might not tell it from the promotional material). It's not so obvious how to do so with his new model. However, I reckon it's more robust for evaluating learning interventions. (Caveat: I may be biased, as I provided feedback.) And should he have numbered them in reverse, which Kirkpatrick admitted might've been a better idea?

Evaluation is critical. We do some, but not enough. (Smile sheets, level 1, where we ask learners what they think of the experience, have essentially zero correlation with outcomes.) We need to do a better job of evaluating our impacts (not just our efficiency). This is a more targeted model. I draw it to your attention.

 

P&D Strategies

4 April 2018 by Clark 1 Comment

In an article, Jared Spool talks about the strategies he sees UX leaders using. He lists three examples, and talks about how your strategies need to change depending on where you are in relation to the organization. It made me think about what P&D strategies could and should be.

So, one of the ones he cited isn't unique to UX, of course. That one is 'continual mentoring': always having someone shadowing the top person in a role. He suspects that it might slow things down a bit, but the upside is continual up-skilling. Back when I led a team, I had everyone take an area of responsibility, with someone backing them up. Cynically, I called it the 'bus' strategy, i.e. in case someone was hit by a bus. Of course, the real reasons were to account for any variability in the team, to create some collaborative work, to share awareness, and to increase the team's understanding. This is an 'internal' strategy.

He cites another, about 'socializing' the vision. In this one, you are collectively creating the five-year vision of what learning (his was UX) looks like. The point is to get shared buy-in to a vision, but it also promotes the visibility of the group. Here again, this is hardly unique to UX, with a small twist ;). This is more an external strategy. I suppose there could be two levels of 'external': outside P&D but inside the organization, and then a truly external one (e.g. with customers).

I'd add that 'work out loud' (aka Show Your Work) would be another internal strategy (at least to begin with). Here, the P&D team starts working out loud, with the unit head leading the way. It gets the P&D team experimenting with new ways to do things, builds shared awareness, and builds a base from which to start evangelizing outside.

I’d love to hear the strategies you’ve used, seen used, or are contemplating, to continue and expand your ability to contribute to the organization.  What’s working?  And, for that matter, what’s not?

No all-singing all-dancing solution

3 April 2018 by Clark 1 Comment

I was pinged on LinkedIn by someone who used the entrée of hearing me speak at next week's Learning Solutions conference to begin discussing LMS capabilities. (Hint: they provide one.) And I thought I'd elaborate on my response, as the discussion prompted some reflections. In short, what are the arguments for and against having a single platform deliver an ecosystem solution?

In Revolutionize Learning & Development, I argue for a Performance & Development ecosystem. The idea is more than courses: it's performance support, social, informal, etc. It's about having a performer-centric support environment with tools and information to hand, to both help you perform in the moment and develop you over time. The goal is to support working alone and together to meet both the anticipated, and unanticipated, needs of the organization.

On principle, I tend to view an 'all-singing, all-dancing' solution as likely to fail on some part of that. It's implausible that one system would have all the capabilities needed. First, there are many functionalities: access to formal learning, supporting access to known or found resources, sharing, collaborating, and more. It's unlikely that all of those can be done well in one platform, let alone done in ways that match any one organization's ways of working.

I'm not saying the LMS is a bad tool for what it does. (Note: I am not in the LMS benchmarking business; there are other people who do that, and it's a full-time job.) However, can an LMS be a full solution? Even if there is some capability in all the areas, what's the likelihood that it's best-of-breed in all? Okay, in some small orgs, where you can't have an IT group capable of integrating the necessary tools, you might settle for working around the limitations. That's understandable. But it's different than choosing to trust one system; it's just having the people act as the glue instead of the system.

It's always about tradeoffs, and so integrating best-of-breed capabilities around what's already in place makes more sense to me. For instance, how *could* one system, as a stand-alone platform, integrate enterprise-wide federated search? It's about integrating a suite of capabilities to create a performer-centric environment. That's pretty much beyond a solo platform, intrinsically. Am I missing something?
