Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

8 November 2016

Demoing Out Loud (#wolweek and #DevLearn)

Clark @ 8:01 am

Demoing is a form of working out loud, right?  So I recently was involved in a project with Learnnovators where we designed some demo elearning (on the workplace of the future), and documented the thinking behind it. (The posts, published by Learning Solutions, are aggregated here.)  And now there's a chance to see it!  So, a couple of things to note.

First, this is Work Out Loud Week, and you should be seeing considerable attention to working out loud (aka Show Your Work). On principle, this is a good practice (and part of the Workplace of the Future, to be recursive).  I strongly recommend you keep an eye out for events and posts that emerge.  There's an official site for Work Out Loud week: Wolweek.com, a Twitter account: @Wolweek, and the hashtag #wolweek, so lots of ways to see what's up. There are many benefits that accrue, not least because you need to create a culture where this practice can live long and prosper. Once it does, you see more awareness of activity, improved outcomes, and more.

Second, if you’ll be at DevLearn next week, I’ll be demoing the resulting course at the DemoFest (table 84). Come by and share your thoughts and/or find out what the goal was, the tradeoffs faced, and the resulting decisions made.   Of course, I encourage you to attend my workshop on elearning strategy and mythbusting session as well.  I’ll also be haunting the xAPI camp on the Tuesday. Hope to see you there!

2 November 2016

Strategy Sessions

Clark @ 8:06 am

In a previous post, I talked about a couple of instances where I worked with folks to let them ‘pick my brain’.  Those were about learning design in particular, but I’ve also worked with folks on strategy.  In these strategy sessions, things work a little differently.

So, in a typical strategy session, I prepare by looking at what they’ve done beforehand: any existing strategy documents. I also look at their current context, e.g. their business, market, customers, and products/services.  Finally, I look at their stated goals. I also explore their stated needs, and see if there are some they may be missing. Then we get together.

I typically will spend an hour or so going over some principles, so we have a shared framework to discuss against.  Then we brainstorm possible actions.  We've prepped for this, circulating the topics in advance, so people have had time to identify their individual ideas. We get them documented, diverging before converging.  This may be a relatively large group, with representative stakeholders, but not so large that it can't be managed.

Then, typically, a smaller group will take those ideas and prioritize them. To be clear, it’s informed by the context and infrastructure, so that the steps don’t just go from easier to harder, but it’s also about choosing steps that are strategic in securing credibility, building capacity, and leveraging other initiatives.  At the end, however, the team I’m working with has both a general roadmap and a specific plan.

And I think this is good. They've gotten some new and valuable ways to think about strategy, and custom advice, all in a very short engagement.  Sometimes it's happened under the rubric of a mobile strategy, sometimes it's more general, but it always opens eyes.  In two particular instances, I recall that the outcomes they ended up focusing on most weren't even on their radar when they started!

This is another instance of how folks can get high benefit from a small engagement.  Picking my brain can be valuable, but it's not a fair engagement unless we make it mutually rewarding.  That's not so hard to do, however.  Just so you know.

1 November 2016

Measuring Culture Change

Clark @ 8:04 am

Someone recently asked how you would go about measuring culture change, and I thought it was an interesting question.  I'll think 'out loud' about what might be the possibilities.  A learning culture is optimal for organizational innovation and agility, and it's likely that not all elements are already in place.  So it's plausible that you'd want to change, and if you do, you'd like to know how it's going.

I think there are two major categories of measures: direct and indirect. Direct measures are ones that are impacting the outcomes you’re looking for, and indirect ones are steps along the way. Say, for instance, one desirable outcome of a learning culture would be, well, learning!  In this case, I mean the broad sense of learning: problems solved, new designs generated, research answering questions.  And indirect would be activity likely to yield that outcome. It could be engagement, or social interaction, or…  If we think of it in a Kirkpatrickian sense, we want to generate the indirect activity, and then measure the actual business impact.

What direct measures might there be?  I can see time to solve customer problems or problems solved per time.  And/or I might look at the rate of research questions answered.  Or the rate of new product generation.  Of course, if you were expecting other outcomes from your culture initiative, you’d naturally want aligned methods.   You could just be concerned with employee engagement, but I’m somewhat inclined (and willing to be wrong) to think about what the outcome of increased engagement would be.  It could also be retention or recruitment, if those are your goals.

These latter – engagement, recruitment, retention – are also possible indirect measures.  They indicate that things are better. Another indirect but more targeted measure might be the amount of collaboration happening (e.g. the use of collaboration tools) or even activity in social networks.  Those have been touted as the benefits of building community in social media, and those are worthwhile as well.

As a process, I think about what I might do before, during, and after any culture change initiative. I’d probably want a baseline to begin with, and then regular (if not continual) assessments as we go.  I’d take small steps, perhaps in one unit to begin, and monitor the impact, tuning as I go along.  Culture change is a journey, not an event, after all ;).
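To make the 'direct measure' idea concrete, here's a hypothetical minimal sketch in Python (the metric names are mine, purely illustrative): compute a rate such as problems solved per day, and compare it against the pre-initiative baseline you captured before starting.

```python
def rate(events: int, days: int) -> float:
    """A direct measure: e.g. customer problems solved per day,
    or research questions answered per day."""
    return events / days

def change_vs_baseline(baseline: float, current: float) -> float:
    """Relative change of a measure against its pre-initiative baseline.
    Positive means improvement; recompute at each regular assessment."""
    return (current - baseline) / baseline

# e.g. 20 problems solved in 10 days before the initiative,
# 30 problems in 10 days after the first unit's pilot:
baseline = rate(20, 10)   # 2.0 per day
current = rate(30, 10)    # 3.0 per day
improvement = change_vs_baseline(baseline, current)  # 0.5, i.e. +50%
```

The same shape works for indirect measures (collaboration-tool use, social-network activity); only the event counts change.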


So ok, that’s off the top of my head, what say you?

26 October 2016

Pick my brain?

Clark @ 8:10 am

It's a continual bane of a consultant's existence that there are people who want to 'pick your brain'.  It's really asking for free consulting, and as such, it's insulting. If you google the phrase, you'll see how many people have indicated their problems with this! However, there are quite legitimate ways to pick my brain, and I thought I'd mention a couple.  In both cases, I think these were great engagements on both sides: high value for a reasonable investment.

Both engagements were with folks who develop content: one a not-for-profit, the other in the higher-ed space.  One had heard me speak about learning design, and one had heard about a workshop I'd given, but both contacted me. It was clear they realized that there's value to them in having a scrutable learning design.

Content Review

So for the first one, they wanted some feedback on their design, and we arranged that I’d investigate a representative sample and provide feedback.  I went through systematically, taking notes, and compiled my observations into a report I sent them.  This didn’t take any investment in travel, but of course this feedback only points out what’s wrong, and doesn’t really provide mechanisms to improve.

I think they were surprised at the outcome, as the feedback was fairly robust.  They had a good design, largely, under the constraints, but there were some systematic design problems.  There were also some places where errors had slipped past editorial review (and this was only a small sample of a replicated model across a broad curriculum). To be fair, some of my complaints came from situations that were appropriate given some aspect of their context that I hadn't known, but there was still a set of specific improvements I could recommend. Their response:

"We found his comments insightful, and we look forward to implementing his expert suggestions to further improve our product…"

Learning Design Workshop

In this case, they’d heard about a workshop that I’d run on behalf of a client, and were interested in getting a similar experience. They had been designing content and had a great ability to track the results of their design and tweak, but really wanted a grounding in the underlying learning science.  I did review some sample content, but I also traveled to their site for a day and presented learning science details and workshopped the implications to their design process.

I went through details such as:

  • the importance and format for objectives,
  • SME limitations and tips on how to work with them,
  • what makes effective practice,
  • the role and characteristics of concepts,
  • the details behind examples,
  • the introduction, and the role of emotions in the learning experience,
  • and more.

We went through examples of their content, and workshopped how they could adjust their design processes in pragmatic ways to instill the important details into their approach.  We also talked about ways to follow up so as not to lose the momentum, but it was clear that this first visit was viewed favorably:

“…a walking encyclopedia of learning science… was able to respond to our inquiries with one well-researched perspective after another”.

So, there are ways to pick my brain that provide high value with mutual benefit on each side.  Sure, you can read my blog or books, but sometimes you may want assistance in contextualizing it to your situation.  I encourage you to think of making an investment in quality.  These are about learning design, but I have some examples in strategy that I intend to share soon.  And more.  Stay tuned for more 'adventures in consulting' tales that talk about ways in which a variety of needs are met.  Maybe one will resonate with you.  Of course, they'll be mixed in with the regular reflections you've come to expect.

25 October 2016

Reconciling Activity and Decisions

Clark @ 8:04 am

In preparing to work with a client on developing their learning science understanding, I realized that I was using two representations about meaningful learner interaction that could be seen to be conflicting.  On the one hand I talk about using decisions as a basis for design, and on the other I refer to activity-based learning. And I have separate diagrams for each.  What was necessary was reconciling activity and decisions.

So first, I talk about how we should be putting learners in the place to make decisions like the ones they'll need to make after the learning experience.  We need to put them in a context, and then a particular event triggers the need for a decision. And then there are options for actions to take.  From the design point of view, there are correct answers and wrong answers. These wrong answers, of course, should reflect where learners go wrong, capturing reliable misconceptions. People mostly don't make errors randomly; errors instead reflect inappropriate models being brought to bear.  And after their choices, there are consequences. I like those consequences to be represented first, before the external feedback comes in.  This is just a better multiple choice question (or other canned interaction), but…

If the consequences of the initial decision lead to a new situation and new decisions, now we're talking a full scenario (whether implemented via branching or a full simulation-driven experience). Note that this is also the structure of a game.  In fact, this emerged from game designer Sid Meier's quote about how games are a series of interesting decisions. Hence, serious games are a series of interesting and important decisions!  And, of course, this is programmed in advance (if we're not talking about online role playing), so learners get feedback without requiring human intervention (though there are powerful benefits to collecting discussion around the learning challenge).

However, I also have characterized learning as a series of activities, and those activities generate some work product and are (ideally) annotated with reflections. These products can (and arguably should) be cast as a response to some storyline that puts learners in a role related to the ones they're likely to be in after the learning experience (even with some exaggeration).  These are complex outputs that are unlikely to be auto-marked, and can be the basis of either or both of peer or mentor review.

The benefits here are that when we make the work product reflect real practice, we’re developing a suite of outcomes beyond just the content. We can require different formats –  presentations, spreadsheets, documents – developing modeling and communication skills. We can require group work, developing interpersonal skills. And we’re developing time management and project management skills as well. The tradeoff is the amount of mentoring time.

The challenge, then, is to identify the differences, and then think about when you'd use each.  The obvious difference is the simpler structure for decisions.  While a branching scenario or programmed simulation/game is more than one decision, it's still more linear than creating a product.  Developing a product is typically a series of many decisions! Hence the difficulty for auto-marking, but also the power for learning. It depends on the learning outcome you need, of course.  Now, too many activities in a short period of time could tax instructor time, so the best answer (as in many things) is to have a blend.

That’s my reconciliation of activity and decisions.  Does it make sense to you?  What did I miss?

19 October 2016

Self-regulation & PKM

Clark @ 8:05 am

I’m a fan of Harold Jarche’s Seek-Sense-Share (SSS) model for Personal Knowledge Mastery (PKM). I was also reading about self-regulated learning, and a proposed model for that. And I realized they could be related. Naturally, I created a diagram.

To start with, Harold's model is oriented around coping with the information flow as a component of learning. He starts with seek, which could be either from a pre-arranged feed or the result of a specific search.  Then, the information is processed, by either or both of representation or active experimentation. Finally, information is shared, either broadcast through some form of post, or sent to a specific target. Note that the interpretations within the SSS boxes, e.g. feed and post, are mine, as I haven't checked them with him.

Now, the model of self-regulated learning I was reading about talks about personal goals, learning actions, and evaluation.  It seems to me that learning goals sit outside of SSS, the SSS serves as the actions, and then evaluation comes after the action. Specifically, the goals inform the choice of feeds and any search, as well as the context for interpretation. Similarly, the results of personal sensing and the feedback from sharing inform the evaluation. And of course, the evaluation feeds new goals.
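My reading of the combined model can be sketched as one pass of a loop. This is a hypothetical Python sketch (the function names are mine, not Harold's): goals inform seeking, sensing and sharing are the actions, and evaluation returns revised goals for the next pass.

```python
def pkm_cycle(goals, seek, sense, share, evaluate):
    """One pass of a self-regulated Seek-Sense-Share loop:
    goals drive the seeking, the SSS stages are the learning actions,
    and evaluation of the results feeds revised goals back in."""
    found = seek(goals)                 # a pre-arranged feed, or a targeted search
    interpreted = sense(found, goals)   # represent and/or experiment, in light of goals
    responses = share(interpreted)      # broadcast a post, or send to a specific target
    return evaluate(interpreted, responses, goals)  # revised goals for the next pass
```

Run repeatedly, this is the 'continual review' outer loop; the same shape applies whether the learner is an individual or an organization.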

Two additional things. First, the encompassing notion is that this is under continual review.  That is, you're taking time to think about how you set goals, act (SSS), and evaluate.  Also, let me note that I think this makes sense at both the individual and organizational level. That is, organizations need to be explicit about their knowledge, experiments, and learnings.

The outside loop is likely to be an implicit part of PKM as well, but as indicated I haven’t had a chance to discuss it with Harold.  However, it’s useful for me to represent it this way as an experiment (see what I did there?). The question is, does this make sense for you?

18 October 2016

Next book?

Clark @ 8:01 am

The time has come to ask: what should be my next book?  I’ve written four so far:

Engaging Learning was something I felt was needed because people had written about the importance of games but no one was writing about how to design them, and I could.

Then, while I wanted to write about elearning strategy, my publisher wanted a book on mobile; I realized one was needed, and the other likely candidates had deferred.  Hence, Designing mLearning.

After that, my publisher’s sister company wanted a book on mlearning for higher education, and I ended up writing The Mobile Academy.

And then I finally convinced my publisher to let me write the elearning strategy book, and Revolutionize L&D was the result.

Let me be clear: I’m proud of each and every one of them.  I think each does the job it was designed to do, well.  However, each was written because either I or the publisher felt there was a need.  Which isn’t a bad thing, but it’s not the only approach. While I have some ideas, and of course it’s up to my publisher (unless I self-publish), it occurs to me to ask you what book I should write next.

So what is the next book you would like to see from me?  What book do you want or need that isn’t out there yet, and that is one that I am the person to write?  Here’s your chance; I’d greatly appreciate it if you took just a minute or two to give it some thought and write out your ideas.  What do you think?

13 October 2016

Infrastructure and integration

Clark @ 8:04 am

When I wrote the L&D Revolution book, I created a chart that documented the different stages that L&D could go through along the way.  Looking at it again, I see that I got (at least) one thing slightly off: I talked about content, but it's really about integration and infrastructure.  And I reckon I should share my thinking, then and now.

The premise of the chart was that there are stages of maturity across the major categories of areas L&D should be aware of.  The categories were Culture, Formal Learning, Performance Support, eCommunity, Metrics, and Infrastructure. And for each of those, I had two subcategories.  And I mapped each at four stages of maturity.

Let me be clear: these were made up. I stuck to consistency in having two subcategories, and mapping to four stages of maturity.  I don't think I was wrong, but this was an effort to raise awareness rather than be definitive. That said, I believed then, and still do now, that the chart I created was roughly right.  With one caveat.

In the area of infrastructure, I focused largely on two subcategories: content models and semantics. I've been big on the ways that content could be used, from early work I did on content models that led to flexible delivery in an adaptive learning system, a context-sensitive performance support system, and a flexible content publishing system. I've subsequently written about content in a variety of places, attended an intelligent content conference, and have been generally advocating that it's time to do content like the big boys (read: web marketers).  And I think these areas are necessary, but not sufficient.

I realize, as I review the chart for my upcoming strategy workshop at DevLearn, that I focused too narrowly.  Infrastructure is really about the technical sophistication (which includes content models & semantics, but also tracking and analytics) and the integration of elements to create a true ecosystem.  So there's more to the picture than just the content models and semantics.  Really, we want to be moving on both the sophistication of the model and its technical underpinnings.

We’ll be discussing this more in Las Vegas in November. And if you’re interested in beginning to offer a richer picture of learning and move  L&D to be a strategic contributor to the organization, this is the chance for a jump-start!


11 October 2016

Organizational Effectiveness for L&D?

Clark @ 8:03 am

Last week included an interesting series of events and conversations.  There was a formal event on innovation in learning technology (that was only partly so), and a presentation by a colleague. I also had a couple of conversations, one with said colleague following his more formal event, and another with another colleague before the initial event. And from that latter conversation came an interesting revelation.  The concept was Organizational Effectiveness, and the question is the relevance to L&D.

Now my colleague in the conversation that preceded the innovation event is wise, with a broad experience across HR.  And I was mentioning that it was hard to see a real sense of urgency in L&D around the problems I can’t help but notice. So, for instance,  I see much elearning that doesn’t reflect serious design.  And similarly, I see too many L&D organizations not looking beyond the course as their responsibility.

My colleague's perspective was interesting. He opined that, by and large, he saw the need for formal learning shrinking, and that more and more HR was focusing on providing self-learning resources instead of courses.  While this doesn't explain L&D complacency, it certainly would explain the lack of interest in investing in improvement. And while it should drive an interest in a broader performance ecosystem, that was seen as the responsibility of other areas.

In particular, he talked about OD and OE.  Now, I've heard of Organizational Development, but have always seen it to be about change management (rightly or wrongly).  However, OE was new to me. He explained it was Organizational Effectiveness, and that intrigued me.  Effectiveness would certainly include the typical L&D role of learning, but also performance support.  It's the 'optimal execution' side of my call for organizational success. If we remapped OD to be about continual innovation, and OE to be about optimal execution, we could have a value footing.

Interestingly, I see that the areas covered as components (not the first four, but these: Decision Making, Change & Learning, Group Effectiveness, & Self-Organizing & Adaptive Systems) seem to be intrinsic to the work I think organizations need (and have been arguing for L&D to take on). The point is, it doesn’t have to be L&D (and given the lack of awareness and urgency, maybe it can’t be). Maybe I need to look more closely at OE (and hope that it’s not just another rebranding).

The focus seems to be on not-for-profits, and achieving outcomes, not redefining them, so perhaps it’s too limited, but somehow I’d like to get a framing that starts generating action.  It’s time to start looking at working smarter and organizational meta-learning, because it appears to me that the need and opportunity are huge.  What are the leverage points?  I welcome any pointers, feedback, ideas, etc.!


6 October 2016

Because quality matters

Clark @ 8:06 am

I was reflecting on some of the actions my colleagues and I take.  These are, in particular, colleagues that have been contributing to the field for a long time, ones who know what they're talking about and whom I therefore respect.  I retweeted one who called for being careful about the source of a message. I've supported another who has been on a crusade against myths.  And I joined with some others to promote quality elearning.  And it led me to wonder why.  Why care?  Why take risks and potentially upset people?  And I realized that it's because I care; because quality matters.

So what do I mean?  For one, it’s about money.  To the extent that people are misled by claims, they can misinvest their money. They might be persuaded to buy products that can’t really deliver what’s promised. They might pursue programs that aren’t going to have a real effect.  We see this a lot, initiatives that don’t achieve the desired outcome. There are lots of ways to fail, but we do know lots about how to do it right. Yet we still see strategies limited to courses, and courses designed poorly, and thus money being wasted that could be doing good.

Yet really, it’s about people.  It’s about giving them the right tools to do their job, whether in their heads or in the world.  In particular, I think that a field that’s about learning is about helping people improve, and that’s a noble pursuit.  Yet, too much of what’s done is under-informed, if not outright misled.  We need to do  better.

And it’s about us.  If we’re to be professional, if we’re going to hold our heads high, if we’re going to have a meaningful impact, we have to do what’s right. And if we don’t know what that is, it’s incumbent on us to find out.  And be smart about it.  Be critical in our investigation of messages (including this one ;). We need to have enough background to be able to sift the wheat from the chaff.  And we need to continue to educate ourselves on the science that is behind what we do.  We need to be responsible.

We need to recognize that changing what is arguably the most complex thing in the known universe (the  mind) in persistent and predictable ways is not simple.  And simple solutions, while appealing, are not going to do the job.  They might meet one particular metric, but when you look at the big picture, aligning improvement with respect, you need to have a rich solution.

And I think awareness is growing. We are seeing more people interested in improving their learning designs despite considerable budget and time pressures.  And we’re seeing folks looking beyond the course, seeking to create an approach that’s broader and yet more focused on success.  Finally, we’re seeing people interested in improving. Which is the first step.

So you can continue to expect me to work for quality, and back up those who do likewise. Together, we might make this field one to be proud of.  I don’t think we’re quite there yet, but it’s within our reach. We can do this, and we should.  Are you with me?

If you’re interested in getting started, and would like some help to get going faster and further, get in touch!

