Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

22 March 2017

Where is Clark?

Clark @ 8:07 am

So, where am I this spring?  I was at ATD’s TechKnowledge in January, and as this is published I’m on my way to Long Beach for their Core 4 event (sold out; if you’re one of the lucky ones there, say hi!). I’m taking the train (and a bus); I’m looking forward to watching the terrain roll by and writing.  But there’re a couple more events this spring.

Next week (March 30th), I’ll be giving a talk to ATD’s East Bay chapter on innovation.  It’ll cover the materials that were part of my presentation last fall to a government agency and my forthcoming CLO article.  We’ll talk about what innovation is (there’s a surprising amount of confusion), what it takes, what the barriers are, and what the role is for L&D.  If you’re here in the Bay Area, it should be fun and informative.

Then, in June, I’ll be at the eLearning Guild’s FocusOn Learning event in San Diego.  There I’ll be talking about Focus Beyond Learning, i.e. the broader performance ecosystem picture in which mobile, video, and games fit. Again, if you’re going, say hello!  It’s also a chance to see my brother and his family (and hopefully get in a surf ;).

That’s pretty much it for the first part of the year.  A bit quiet, but providing time for some writing.  Of course, if you need a keynote or a workshop…let me know. I have to admit I’m thinking that workshops around the deeper cognitive aspects of learning would be a big boost to organizational L&D.

21 March 2017

Top down or bottom up strategy?

Clark @ 8:09 am

In a recent discussion around HR strategy, the question arose about where to start.  That is, if you’ve bought into moving into the digital age, where do you begin?  The flip answer from the host of the event, a large consulting agency, was to hire them (and my flip reply is to ask whether you want newly minted MBAs following a process designed to be ‘heavy’, or someone coming in light and fast with an adaptive approach ;). But then they got serious, and responded that you shouldn’t be reactive to people’s stated needs, and that you needed data to identify which problems are crucial.  And I wasn’t satisfied with that, for two related reasons.  In short, I thought that was still reactive, that it wasn’t going to help you focus ahead, and that you needed top-down to complement bottom-up.

This was buttressed by a post pointed out to me by my ITA colleagues that argued a good design strategy was to find out what people needed. And I’m reminded of the quote by Steve Jobs that you can’t just give people what they want, because by the time you do, they’ve changed their minds.  And just finding out what people need and doing it is a bit reactive, it seems to me, regardless.  Even, to be honest, finding the company’s biggest barriers and addressing them isn’t a sufficient response.  It’s a good one, but it’s not enough.

Interestingly, an HR Director sitting next to me was nodding her head during that response about the data. So afterward I asked her what sort of data she had in mind. I asked about both survey data, and business metrics, and she indicated both (and anything else ;).  And I think that’s a good basis. But not a sufficient one.

If you look at most design in the real world, you’ll see that designers cycle between top-down and bottom-up.  It helps to check that you’re indeed draining the swamp, but also to ensure you’re not getting eaten by alligators.  And that’s the point I want to make.

I’m (obviously) a believer in frameworks. I want conceptual clarity. And I don’t want best practices, I want to abstract best principles and recontextualize them.  But I also believe you need to check how you’re going, and regularly test.  There are some overarching results that should be incorporated: culture, innovation, performance support, etc. And they can be instituted in ways that address problems yet also develop your ability.

So I do think collecting data on what’s going on, and identifying barriers is important.  But if you’re not also looking at the horizon and figuring out where you’re going in the longer term, you could be metaphorically ensuring no flat tires on a trip to the wrong neighborhood.  My short answer to their question would’ve been to document where you are, and where you want to get, and then figure out which of the top issues the data indicate sets you on a path to address the rest and build your capability and credibility.

15 March 2017

Technology or preparation?

Clark @ 8:10 am

In listening to a recent presentation on the trends affecting the workplace and HR, there was mention of how organizations were using more cognitive technology, AI, etc., and how this was changing jobs. There were two additional notes.  First, these efforts aren’t (largely) leading to job losses, as these folks were being reskilled. Second, HR wasn’t involved in 65% of this.  That’s a concern. But one of the things I wondered was whether all the new, smart technology really would help as much as was intended or needed.

So here’s some context (I may have heard this in conjunction with an early experiment in using mobile devices to support drug trials).  Pharmaceutical companies are continually trying new drugs. One claim is that if people would follow their medicine regimens, many of these new drugs wouldn’t be necessary.  That is, the drugs are oftentimes designed to require fewer doses, with simpler instructions, to make up for inappropriate use.

Likewise with the origin of performance support.  The question is where the locus of responsibility belongs. Interface design people were upset about performance support systems, arguing (correctly) that performance support was being used to make up for bad system design in the first place.  In fact, Don Norman’s book The Invisible Computer was about how interface design wasn’t being brought in early enough.  The point being that properly designed interfaces would incorporate support for our cognitive limitations inherently, not externally.

So, many of the things we’re doing are driven by bad implementation. And that’s what I started wondering: are we using smart technology to enhance an optimized workforce, or to make up for a lack of adequate preparation?  We could be putting in technology to make up for what we’ve been unsuccessful at doing through training and elearning (because we’re not doing that well).

To put it another way, would we get better returns applying what’s known about how we think, work, and learn than bringing in technology? Would adequate preparation be a more effective approach than throwing technology at the problem, at least in some of the cases?  There are strong reasons to use technology to do things we struggle at doing well, and in particular to augment us.  But perhaps a better investment, at least in some cases, would be to appropriately distribute tasks between the things our brains do well and what technology does better.

Let me be clear; there are technologies that will do things more reliably than humans, and do things humans would prefer not to. I’m all for the latter, at least ;). And we should optimize both technology and people.  I’m a fan of technology to augment us in ways we want to be augmented.  So my point is more to consider whether we’re doing enough to prepare people and support them in working together.  Your thoughts?

14 March 2017


Clark @ 8:01 am

There’s been a lot of talk about microlearning of late – definitions, calls for clarity, value propositions, etc – and I have to say that I’m afraid some of it (not what I’ve linked to) is a wee bit facile. Or, at least, conceptually unclear.  And I think that’s a problem. This came up again in a recent conversation, and I had a further thought (which of course I have to blog about ;).  It’s about how to do microdesign, that is, how to design micro learning. And it’s not trivial.

So one of the common views of micro learning is that it’s just in time. That is, if you need to know how to do something, you look it up.  And that’s just fine (as I’ve recently ranted). But it’s not learning. (In short: it’ll help you in the moment, but unless you design it to support learning, it’s performance support instead.)  You can call it Just In Time support, or microsupport, but properly, it’s not micro learning.

The other notion is learning that’s distributed over time. And that’s good.  But this takes a bit more thought. Think about it. If we want to systematically develop somebody over time, it’s not just a steady stream of ‘stuff’.  Ideally, it’s designed to optimally get there, minimizing the time taken on the part of the learner, and yet yield reliable improvements.  And this is complex.

In principle, it should be a steady development that reactivates and extends learners’ capabilities in systematic ways. So, you still need your design steps, but you have to think about granularity, forgetting, reactivation, and development in a more fine-grained way.  What’s the minimum launch?  Can you do aught but make sure there’s an initial intro, concept, example, and a first practice?  Then, how much do we need to reactivate versus how much do we have to expand the capability in each iteration? How much is enough?  As Will Thalheimer says in his spaced learning report, the amount and duration of spacing depends on the complexity of the task and the frequency with which it’s performed.

When do you provide more practice, versus another example, versus a different model?  What’s the appropriate gap in complexity?  We’ll likely have to make our best guesses and tune, but we have to think consciously about it.  Just chunking up an existing course into smaller bits isn’t taking into account the decay of memory over time and the gradual expansion of capability. We have to design an experience!
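To make the tuning concrete, here’s a small sketch of one way to schedule reactivations with expanding gaps. The numbers (base gap, multiplier) are purely illustrative placeholders, not recommendations from Thalheimer’s report — the point is that complexity and frequency of performance become knobs you consciously set and adjust:

```python
from datetime import date, timedelta

def spacing_schedule(start, sessions, base_gap_days=2, multiplier=2.0):
    """Generate reactivation dates with expanding gaps.

    base_gap_days and multiplier are illustrative knobs: a more
    complex or less frequently performed task would warrant a
    smaller multiplier, i.e. more frequent reactivation.
    """
    dates = [start]
    gap = base_gap_days
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=round(gap)))
        gap *= multiplier  # each gap doubles: 2, 4, 8, 16 days...
    return dates

schedule = spacing_schedule(date(2017, 3, 14), sessions=5)
print([d.isoformat() for d in schedule])
# → ['2017-03-14', '2017-03-16', '2017-03-20', '2017-03-28', '2017-04-13']
```

In a real design you’d also decide, at each of those dates, whether the learner gets reactivation (retrieval practice) or an expansion (a harder practice, a new example) — the schedule only answers “when”, not “what”.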

Microlearning is the right thing to do, given our cognitive architecture. Only so much ‘strengthening’ of the links can happen in any one day, so to develop a full new capability will take time. And that means small bits over time makes sense. But choosing the right bits, the right frequency, the right duration, and the right ramp up in complexity, is non-trivial.  So let’s laud the movement, but not delude ourselves either that performance support or a stream of content is learning. Learning, that is systematically changing the reliable behavior of the most complex thing in the known universe, is inherently complex. We should take it seriously, and we can.

8 March 2017

A ‘Field of Dreams’ Industry

Clark @ 8:09 am

In the movie Field of Dreams, the character played by Kevin Costner is told “If you build it, they will come.” And I use an image from this movie to talk about learning culture, in that you can put all the elements of the performance ecosystem together, but if you work in a Miranda organization (where anything you say can and will be held against you), you won’t be able to tap into the power of the ecosystem because people won’t share. But it’s clear that the problem is worse; the evidence suggests that L&D overall is in a ‘Field of Dreams’ mentality.

A new report (in addition to the two I cited last week) documents the problems in L&D.  LinkedIn has released their Workplace Learning report, and one aspect stood out: only 8% of CEOs see business impact from L&D, and only 4% see ROI.  And if you ask about the top ways they evaluate their programs, the top five methods are subjective or anecdotal.  Which concurs with data a few years ago from ATD that implementation of measurement according to the Kirkpatrick model dropped off drastically: while 96% were doing level 1, only 34% were doing level 2, and it went dramatically down from there. In short, L&D isn’t measuring.

Which means that there’s a very strong belief that: if we build it, it is good.  And that, to me, is a Field of Dreams mentality. It feels like the L&D industry is living in a world where they take orders and produce courses and trust that it all works.  I was pleased to hear that there’s testing, but there’s far too little measurement.

And, interestingly, one other statistic struck me: “less than ¼ are willing to recommend their program to peers”.  To put it another way, the majority of L&D are embarrassed by their outputs. This is no better than the statistics I reported in my book calling for an L&D Revolution!

So, the complaints are predictable: too little money, too few people, and getting people to pay attention. Um, that comes when you’re demonstrably contributing to the organization. And that’s the promise I think we offer. L&D could and should be a big contributor to organizational success. If we were adequately addressing the optimizing performance side of the story, and ensuring  the continual innovation part as well, our value should and would be high.

It’s past time L&D moves beyond the ‘Field of Dreams’ status, and becomes a viable, and measurable contributor to organizational success. It’s doable, under real world constraints. It needs a plan, and some knowledge, but there’s a path forward.  So, are you ready to move out of the corn, and onto the road?

7 March 2017

Learning Design Insights

Clark @ 8:06 am

I attended a recent Meetup of the Bay Area Learning Design & Technology group, and it led to some insights. As context, this is a group that meets in the evening roughly every month or two.  It’s composed of students and new graduates as well as experienced practitioners. The topic was Themes from a Hat (topics are polled and then separate discussions are held). I was tapped to host the Learning Design conversation (there were three others: LMS, Measurement, and Social Learning), which meant that a subset of the group sat in on each discussion. Each host ran four separate discussions, so everyone had a chance to discuss every topic (except us topic hosts ;).

I’d chosen to start with four questions to prompt discussion:

  • What is good learning design?
  • Are you doing good learning design?
  • What are the barriers to good learning design?
  • What can we do to improve learning design?

In each case, we never got beyond the first question!  However, in the course of the discussions, we ended up talking quite a bit about the others.  I confess that I’m a just a wee bit opinionated and a stickler for conceptual clarity, so I probably spoke too much about important distinctions.  Yet there were also some valuable insights from the group.

First, it was a great group: enthusiastic, with a wide range of experience and backgrounds.  Folks had come into the field from different areas, everything from neuroscience to rabbinical practice!  And there were new students still in a Master’s program, job seekers, and those who were active in work.  Everyone contributed.  While it meant missing #lrnchat, it was worthwhile to have a different experience.  And everyone was kind enough to understand when I had to have my knee up as rehab (thanks!).

The responses to the first question were very interesting: what is good learning design?  While most everyone talked about features of the experience, we also talked about both the outcome and the process.  There even emerged a discussion about what learning was.  I offered the traditional (behaviorist) description: a change in behavior in the same context, e.g. responding in a different (and presumably better) way.  I also mentioned my usual: learning is action and reflection; instruction is designed action and guided reflection.

One element that appeared in all four groups was ‘engaging’.  Exactly that word. (Only once did I feel compelled to mention that Engaging Learning was the title of my first book! ;)  There were other terms that encompassed it, including ‘experience’, ‘stimulating’, and ‘motivating’.  I was pleased to see the recognition of the value! To define it, discussion several times ranged across things like challenging practice and making it meaningful to learners.

Another element that recurred was ‘memorable’. It seemed what was meant was ‘retention’ (over time until needed) rather than that the learning experience was worth recalling. This did bring up a discussion of what led to retention, and a discussion of spaced learning.  That is, the fact that our brains can only strengthen associations so much in one day before sleep is needed. Slow learning.  Reactivation.

That same discussion came up with another repeated term: micro learning.  There appeared to be little differentiation between different interpretations of that term, so I made distinctions (as one does ;).  People too often use the term micro learning to mean looking something up just when needed (such as a video about how to do something).  And that’s valuable.  Yet it can  lead to successful performance in the moment without any learning (e.g. forgotten shortly thereafter). Which is fine, but it’s not learning! Microlearning might be some very small thing that can be learned right in the moment, but I reckon those are rare. What I really think micro learning could and should be is for spaced learning.  I think that to do that successfully is a non-trivial exercise, by the way.

We covered other topics about design, too.  In at least one group we talked about SME limitations and how to work with them. We also talked about the benefits of collaboration, and knowing your audience. And engaging the audience, making the learning meaningful to them and the organization. Minimalism came up in several different ways as well, not wasting the learner’s time.

One question had arisen in discussion with colleagues, and I took the opportunity in a couple of groups to ask about their design practices. The question was how frequently a designer is handed a course demand and then works alone from go to whoa.  It varied: it seemed like there was some of that, but also a fair bit of collaboration, at least at certain points, and some iterative testing. This was heartening to hear!  Doing performance consulting and meaningful measurement, however, did appear somewhat challenging.

Overall, there’s an opportunity for some deeper science behind elearning, yet I was very heartened by the enthusiasm and that the design processes weren’t as ‘solitary waterfall’ as I feared. So, who’s up for a deeper learning science workshop?  ;)


1 March 2017

The change is here

Clark @ 8:04 am

For a number of years now (at least six), I’ve been beating the drum about the need for organizations to be prepared to address change. I’ve argued that things are happening faster, and that organizations are going to have to become more agile.  Now we’re seeing the evidence that the change has arrived.

Two recent reports highlight the awareness. Gallup recently released a report on The State of the American Workplace that talks about the lack of engagement at work.  Deloitte also released a report, Rewriting the rules in the digital age, that documents trends shifting the office environment.  With different perspectives, they both overlap in discussing the importance of culture.  It’s about creating an environment where people are empowered and enabled to contribute.

The Gallup report concludes with new behaviors for leaders and managers.  The first point for leaders is to use data and focus on culture. This, to me, involves leveraging technology and creating an environment. L&D could be leading, using performance data captured through the Experience API (xAPI), and facilitating the culture shift in courses and developing coaching. Their prescription for managers is to move to being coaches (and again, L&D should be both developing the skills and facilitating the processes).  And employees need to take ownership of their own development, which means L&D should focus on both meta-learning and ensuring resources (curation and creation) as well.
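For readers who haven’t seen xAPI data up close, the spec records performance events as simple actor–verb–object statements. Here’s a minimal sketch of one; the learner, activity id, and score are made-up illustrations, while the verb IRI comes from the standard ADL vocabulary:

```python
import json

# A minimal xAPI statement: actor / verb / object, plus an optional
# result. Verb ids are IRIs; the one below is from ADL's standard
# vocabulary. The actor, activity id, and score are hypothetical.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Pat Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/coaching-module-1",
        "definition": {"name": {"en-US": "Coaching skills, module 1"}},
    },
    "result": {"success": True, "score": {"scaled": 0.85}},
}

print(json.dumps(statement, indent=2))
```

Because statements like this can describe performance in the workflow (not just course completions), they’re exactly the kind of data L&D could be putting in front of leaders.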

The second report is the more interesting one for me, because it’s about the trends and the ways to adapt.  The top two trends are the Organization of the future (cf. the Workplace of the Future :) and Careers and learning.  The former is about redesigning organizations to become agile.  The latter is about a redefinition of learning.  They are a wee bit old-school, however: while they do discuss innovation throughout, it isn’t a core focus, and their definition of learning doesn’t include informal learning.  It’s still a top-down model.  But again, clear opportunities for L&D.

The key leverage points, to me, are learning and technology.  And here I mean more self-directed and collaborative learning conducted not formally, but facilitated. Social learning really can’t be top-down!  Important technologies are for communicating and collaborating, as well as tools to search and find resources.

And while the focus is on HR, including recruitment and leadership, I reckon that L&D should have a key place here, as indicated. The world’s changing, and L&D needs to adapt.  It’s time to innovate L&D to support organizational innovation. Are you ready?

28 February 2017

Revisiting the Ecosystem

Clark @ 8:10 am

One of the keys to the L&D revolution is recognizing the full performance ecosystem and the ways technology can support performance and development.  I’ve tried to represent and share my thinking via diagrams (including here, here, and here).  Prompted by a recent conversation, it was time to revisit the representation.

Here, I’m layering on several different ways to think about the goals, elements, etc.  (Given that this is an initial version, I’m kind of haphazard about labels like mechanisms, components, etc.)  To start, as I continually argue, at the bottom it’s about coupling optimal execution with continual innovation.  We need to do well those things we know we need to do, and then we need to continually improve.  I think that more and more of the optimal execution is getting automated.

On top of that, we have components – content and people – and the tactics to leverage them. We create or curate content (curation over creation!), and we develop relationships through community or find appropriate expertise through recommendations or search.   The goal is to have the right content and the right people ‘to hand’ to work with.

We develop content elements like performance support to support performing in the moment, and learning resources for self-directed learning over time.  We also use courses, whether individual or collaborative, to develop people (particularly when they’re novices). I’d put courses to the left and performance support to the right (above content) if we were talking about developing people (as I have here). So, for novices we first use courses, then practitioners need resources and coaching, and experts need interaction.  However, performance support is on one side on a continuum of mechanisms from performing, to developing, to innovation.  That’s what I’ve captured here.

Similarly, we use social elements like coaching, mentoring, and informal learning to develop ourselves and our organizations over time.  We use processes like consuming and completing to execute. Then we develop our ability to execute, and continue to learn, through communicating and collaborating.

There are lots of ways to represent the ecosystem, and given that elaboration theory tells us multiple representations help, here’s another stab. There are lots of elements to consider and fine tune, but I like to share my thinking to help it develop!  Overall, however, the opportunity is the chance to be contributing to organizational success in systematic and valuable ways. And that, I’ll suggest, is valuable. I welcome your thoughts.

22 February 2017

Another model for support

Clark @ 8:08 am

I was thinking about today’s post, wherein I was talking about a couple of packages that might help organizations move forward. I was reflecting back on some previous posts about engagement models, and was reminded of a more recent one. And I realized this has played out in a couple of ways. And these approaches did provide a way to develop the organization’s abilities to develop better learning.  So this is another model for support for developing at least the learning side of the equation.

In a couple of instances, I’ve worked with organizations on a specific project, but in a particular way.  For each, my role was to lead the design. In one case, it was for a series of elearning modules. My role was to develop the initial template that the rest of the content fit.  Note that this isn’t a template for tarting it up, but instead a template for what the necessary elements were, and the details around them, to ensure that the elements (e.g. intro, concept, practice, etc.) both fit together and reflected the best learning science. In a more recent instance, it was on a specific focus, but there were several modules that used a similar structure.

What happens, importantly, is that by working collaboratively, we learn together.  Each of these organizations was in the business of developing content, but they were looking to raise their game. So, for instance, through leading the Workplace of the Future project but sharing the thinking behind it, by working out loud in that sense, it’s possible to develop a shared understanding.  And in the latter case, though they’d read the Deeper eLearning series, they got a lot more out of working it through with me.  (And, I’ll suggest, more than also reading the subsequent blog posts I wrote about the project.)

In each case, we created an overall template for the learning, and then detailed what the elements for the template were, and the critical components. Then we applied it, usually with me doing it first and then handing off. It’s really a Cognitive Apprenticeship approach.

So, it’s a slightly more involved approach, with a much more variable scope, but in conjunction with other approaches I’ve mentioned like critiquing content or design processes, it’s one way to get a jump on deeper learning science.  Just trying to think of models that can support improvement, and that’s what I’m trying to push.


21 February 2017

Support for moving forward

Clark @ 8:08 am

I have to admit I’ve been a bit surprised to see that movements towards improving elearning and learning strategy haven’t had more impact. On the learning design side, e.g. the Serious eLearning Manifesto and our Future of Work project, it still seems there’s a focus on content presentation.  And similarly with learning strategy: despite the Revolution, it doesn’t appear that there’s any big move in L&D to take a bigger perspective.  And my question is: “why not?”

So I’ve been trying to think what might be the barriers to move forward.  What could keep folks from at least taking initial steps?  Maybe folks are making moves, but I haven’t seen much indication.  So naturally I wondered what sort of support could be needed to move forward.

Perhaps it seems too overwhelming?  In the manifesto we did say we don’t expect people to take it all on at once, but we know folks sometimes have trouble breaking it down. Similarly, there’re a lot of components to the full performance ecosystem.  One possibility is that folks don’t know where to start.  I wrote sometime shortly after the manifesto’s release that the best place to start was with practice. And I’ve similarly argued that perhaps the best revolution catalyst is measurement. But maybe that’s too general?

So I wondered if perhaps some specific support would assist.  And so I’ve put together a package for each: an initial assessment to identify what’s working and what’s not, and from which to give some initial recommendations.  And I’ve tried to price them so that they’re not too dear, nor too hard to get approval for, but provide maximum value for minimal investment. Both are based upon the structure of previous successful engagements. (The learning strategy one is a little more because it’s a wee bit more complex. ;)  Both are also based upon frameworks I’ve developed for each:

  • elearning design is based upon deeper elearning and the leverage points in the design process
  • elearning strategy is based upon the performance ecosystem model and the implications for developing and delivering solutions.

In each, I’m spending time beforehand reviewing materials, and then just two days on site for some very targeted interviews and meetings.  The process involves talking to representative stakeholders, and then working with a core team to explore the possibilities and prioritize them. It also includes an overview of the frameworks for each, as a basis for a shared understanding.

The goal is to use an intensive investigation to identify what’s the current status, and the specific leverage points for immediate improvement and longer-term shifts. The output is a recommendation document that documents what’s working and where there are opportunities for improvement and what the likely benefits and costs are.

This isn’t available directly from the Quinnovation site: I’m starting here to talk to those who’ve been tracking the arguments. Maybe that’s the wrong starting point, but I’ve got to start somewhere. I welcome feedback on what else you might expect or want or what would help.

If you’d like to check out the two packages and start moving forward, have a look here and feel free to follow up through the contact link.  You’ve got to have the 3 Rs: responsibility, resources, and resolve.  If I can help, glad to hear it.  If not, but there’s something else, let me know.  But I really do want to help move this industry forward, and I’ll continue to try to find ways to make that happen.  I invite you to join me!
