Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

30 November 2016

Content aggregation

Clark @ 8:05 am

At the recent DevLearn conference, I had a chance to present at the xAPI Base Camp on content systems.  This is a topic I've been thinking about since working on an adaptive learning system back in 1999-2000, and I think it's now an even bigger opportunity.  In the course of responding to an interested attendee, I aggregated this list of my (non-blog) publications on the topic, and I thought it might be useful to share them here (from most recent to earliest):

Talking about lifecycle and opportunities in the Litmos blog

Perspective from visiting the Intelligent Content conference in eLearnMag

The possibilities and necessities of adapting to context in Learning Solutions magazine

New ways of looking at learning content in Learning Solutions magazine

Moving from text to experience for publishers in eLearnMag

A discussion paper including granularity and tagging in the IFETS journal

What you see is a transition in focus.  From thinking about content objects at the right granularity and with the right tagging, I moved to thinking about strategy for content publishers. Ultimately, I have shifted to looking at the learning industry and opportunities to adapt learning to context (which includes location, current task, current role, and more).
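
To make the granularity-and-tagging idea concrete, here's a minimal sketch (my own illustration, not drawn from any of the publications above) of small content objects carrying descriptive metadata, with a selector that filters by context such as role and task. All the names and tags are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ContentChunk:
    """A small, single-purpose content object (e.g., one concept, example, or practice item)."""
    chunk_id: str
    kind: str                                   # e.g., "concept", "example", "practice"
    tags: dict = field(default_factory=dict)    # descriptive metadata, e.g., {"role": "sales"}
    body: str = ""

def select_for_context(chunks, context):
    """Keep chunks whose tags don't conflict with the current context (untagged = universal)."""
    return [
        c for c in chunks
        if all(c.tags.get(key) in (None, value) for key, value in context.items())
    ]

# Hypothetical usage: assemble content for a learner's current role and task.
library = [
    ContentChunk("c1", "example", {"role": "sales", "task": "quoting"}, "Worked quote..."),
    ContentChunk("c2", "example", {"role": "support"}, "Ticket triage..."),
    ContentChunk("c3", "concept", {}, "Pricing model overview..."),
]
print([c.chunk_id for c in select_for_context(library, {"role": "sales", "task": "quoting"})])
# -> ['c1', 'c3']
```

The point of chunking and tagging this way is that the same objects can be assembled differently for different contexts, rather than being locked into a single course.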

While I think that adaptivity is a ways away, I also believe that the initial efforts in getting more rigorous about content strategy, engineering, and governance are a worthwhile investment now. The benefits are in less redundancy, tighter design, tracking, and greater flexibility.  The long term benefits will come from analytics, adaptivity, and contextualization.  I reckon that’s an opportunity that’s hard to ignore. I’ll suggest that it’s past time to be thinking about it, and time to start acting.  What about you?  Are you ready?

29 November 2016

Thoughts on Learning Design Strategy

Clark @ 8:10 am

At the DevLearn conference, I ran a Morning Buzz on Learning Design Strategy. I'm happy to say that the participants threw in lots of ideas, and I thought they were worth capturing. I started with a set of questions to address, so I'll go through their comments in roughly that order (though we didn't exactly follow this structure):

What is learning design strategy?

I had in mind the approach taken by an organization to their learning design.  Attendees suggested it’s your goals and approach, ensuring you are delivering effectively.  It’s also your review approach, and metrics.  These are all elements that indeed contribute to strategy.

What gaps are we seeing in learning design strategy?

The participants offered up a suite of problem areas, including aligning with organizational goals and access to support for measuring impact, both of which are indeed strategic issues.  They also raised problems with prioritizing demands, the need to move beyond just courses, and a lack of learning design knowledge. All are real problems.

What do we need to be able to improve?

The audience offered up a number of suggestions.  For one, there was a desire for strategies (probably more tactics) for going beyond 'the event'.  Support for selling changes to the way things are done was mentioned as well.  The shift to self-learning was mentioned, leading to concern over how to support it. Attendees also mentioned a need for awareness of designing 'backwards'. Finally, a culture of learning was expressly discussed.

What are possible solutions?

The participants offered a suite of suggestions. One was adopting a learn-apply-perform model, which another termed learn-practice-demo. Both were getting at the need for active practice and the ability to actually demonstrate performance.  There was also mention of looking to social networks and peer recommendations to lower the demand and facilitate self-learning.  A culture shift was suggested, supported by the methods used to teach! A final solution was to move quickly to mentoring, which implicitly suggests including mentoring in the design.

Steps to take to move forward?

I also wanted to know how they might move forward, and what they needed.  Two clear suggestions emerged.  One was for examples: both of better learning designs, and of approaches to implementing those designs in organizations.  The other was for tools. Here it was clear that they weren't talking about tools to develop learning, but tools to support doing good design and following processes.

At the end I left with mixed feelings. It's good to know that the problems I see are reflected in what the practitioners report; we see the same problems.  It's also sad that these problems exist.  I do believe that the Serious eLearning Manifesto is one piece of support.  And I've written on practices (e.g. with SMEs), but it's clear that some practical scaffolding would help. I've worked with a few organizations, but I'm struggling to find ways to help more.  (Maybe this is the topic of my next book?) So, what ideas do you have?

(I’m offering a webinar next week that will address these issues, if you’re serious about making changes.)

23 November 2016

Special Webinar on Learning Design Strategy

Clark @ 8:03 am

Learning, properly done, should have an impact. It's about systematically changing behavior, developing new skills to meet ever-changing needs. That's why we invest in learning: training or elearning. If elearning doesn't make an impact, who cares how accessible or affordable it seems? It's actually undermining your goals for increasing expertise, effectiveness, and productivity.

Too much of what is done under this umbrella isn't sufficient. It's quite simply not effective. Here are some signs that your elearning might not be working:

  • Your learning unit develops courses on demand
  • You work from PDFs and PPTs to develop your course
  • You have a knowledge quiz at the end
  • You use libraries to add graphics & interactions
  • Your learners avoid the courses
  • You track completion
  • You evaluate impact by learner feedback

If you're spending money to develop elearning, and it has any of the above features, there's a strong chance you're burning money.

The good news is that it's not as hard to change as you think! The design processes you are using now, the ones that reflect the above issues, are not that far removed from ones that deliver real outcomes. Yes, there are changes, but they're changes within the process, not fundamental ones. You can make changes that have only a marginal impact on development measures, but a real impact on learning outcomes.  It's not trivial, but it is doable.

If this is of interest, I'm offering a free webinar to talk through the issues. It's not for everyone: if you don't have the authority or the resources to make a change, there are other posts that talk through the opportunities. This webinar is for those who really want to explore the possibilities. If that's something you want to be thinking about, if you'd really like to consider how you get from here to there, I encourage you to keep reading.

Webinar: 7 Unbelievable eLearning Mistakes

Date: December 7

Time: 10 AM to 11 AM Pacific

Via: Zoom Conference Service

Have you been concerned about your learning design: whether your designs are actually producing results? Ultimately, is your elearning changing behaviors, developing skills, and increasing capacity? And how do you know?  There are a lot of reasons to believe that most elearning is not delivering on the promise. And yet elearning has the potential to be a powerful tool for organizational excellence. The barriers are not insurmountable; we know the problems, and the steps to change.  However, as has been said: "When all is said and done, more is said than done." I want to give you the chance to take those steps. In this complimentary webinar we'll explore the barriers to getting measurable results from elearning.

This isn’t your usual webinar, however. Here we are specifically talking to organizations that want their elearning to actually have an impact. If you don’t have the resources and position to make a change, this really isn’t for you. If you want your organization to take it more seriously, invite your boss ;).

This is for you, if:

  • You’re ready to look at your elearning with a serious eye
  • You want to ensure that you are getting value for your investment
  • You need to operate in the real world, under real constraints
  • You’re willing to invest for a real change that has impact

This isn’t for you if:

  • You are happy with the status quo
  • You haven’t the authority or the resources to make a change

If you are ready to take a serious look at your elearning, I invite you to sign up.  There's a limit to how many can attend, so please register early. When you register I'll send you the necessary details. Hope to see you!

Just click here for the signup form.

18 November 2016

Karen McGrane #DevLearn Keynote Mindmap

Clark @ 12:27 pm

Karen McGrane closed the DevLearn conference talking about adaptive content. She had addressed mLearnCon in the past with a great presentation, so my expectations were high.  Plus, given that I riffed on background integration in my elearning strategy pre-con, and then on content strategy as a session in the xAPI camp the next day, this is a talk I was eager to hear (congrats to the eLearning Guild for putting the topic on the table).

In this entertaining and illuminating session, she made the point that responsive design is better than customizing to each screen, and that adapting is hard, so responsive is a good starting point.

Karen McGrane Keynote Mindmap

8 November 2016

Demoing Out Loud (#wolweek and #DevLearn)

Clark @ 8:01 am

Demoing is a form of working out loud, right?  So I was recently involved in a project with Learnnovators where we designed some demo elearning (on the workplace of the future), and documented the thinking behind it. (The posts, published by Learning Solutions, are aggregated here.)  And now there'll be a chance to see it!  So, a couple of things to note.

First, this is Work Out Loud Week, and you should be seeing considerable attention to working out loud (aka Show Your Work). On principle, this is a good practice (and part of the Workplace of the Future, to be recursive).  I strongly recommend you keep an eye out for events and posts that emerge.  There's an official site for Work Out Loud week: Wolweek.com, a Twitter account: @Wolweek, and the hashtag #wolweek, so there are lots of ways to see what's up. There are many benefits that accrue, not least because you need to create a culture where this practice can live long and prosper. Once it does, you see more awareness of activity, improved outcomes, and more.

Second, if you’ll be at DevLearn next week, I’ll be demoing the resulting course at the DemoFest (table 84). Come by and share your thoughts and/or find out what the goal was, the tradeoffs faced, and the resulting decisions made.   Of course, I encourage you to attend my workshop on elearning strategy and mythbusting session as well.  I’ll also be haunting the xAPI camp on the Tuesday. Hope to see you there!

2 November 2016

Strategy Sessions

Clark @ 8:06 am

In a previous post, I talked about a couple of instances where I worked with folks to let them ‘pick my brain’.  Those were about learning design in particular, but I’ve also worked with folks on strategy.  In these strategy sessions, things work a little differently.

So, in a typical strategy session, I prepare by looking at what they’ve done beforehand: any existing strategy documents. I also look at their current context, e.g. their business, market, customers, and products/services.  Finally, I look at their stated goals. I also explore their stated needs, and see if there are some they may be missing. Then we get together.

I typically spend an hour or so going over some principles, so we have a shared framework to discuss against.  Then we brainstorm possible actions.  We've prepped for this by circulating the topics beforehand, so people have had time to identify their individual ideas. We get them documented, diverging before converging.  This may be a relatively large group, with representative stakeholders, but not so large that it can't be managed.

Then, typically, a smaller group will take those ideas and prioritize them. To be clear, the prioritization is informed by the context and infrastructure, so the steps don't just go from easier to harder; it's also about choosing steps that are strategic in securing credibility, building capacity, and leveraging other initiatives.  At the end, however, the team I'm working with has both a general roadmap and a specific plan.

And I think this is good. They've gotten some new and valuable ways to think about strategy, and custom advice, all in a very short engagement.  Sometimes it's happened under the rubric of a mobile strategy, sometimes it's more general, but it always opens eyes.  In two particular instances, I recall that the outcomes they ended up focusing on most weren't even on their radar when they started!

This is another instance of how folks can get high benefit from a small engagement.  Picking my brain can be valuable, but it's not a fair engagement unless we make it mutually rewarding.  That's not so hard to do, however.  Just so you know.

1 November 2016

Measuring Culture Change

Clark @ 8:04 am

Someone recently asked how you would go about measuring culture change, and I thought it’s an interesting question.  I’ll think ‘out loud’ about what might be the possibilities.  A learning culture is optimal for organizational innovation and agility, and it’s likely that not all elements are already in place.  So it’s plausible that you’d want to change, and if you do, you’d like to know how it’s going.

I think there are two major categories of measures: direct and indirect. Direct measures track the outcomes you're looking for, and indirect ones track steps along the way. Say, for instance, one desirable outcome of a learning culture would be, well, learning!  In this case, I mean the broad sense of learning: problems solved, new designs generated, research questions answered. Indirect measures would cover activity likely to yield that outcome. It could be engagement, or social interaction, or…  If we think of it in a Kirkpatrickian sense, we want to generate the indirect activity, and then measure the actual business impact.

What direct measures might there be?  I can see time to solve customer problems or problems solved per time.  And/or I might look at the rate of research questions answered.  Or the rate of new product generation.  Of course, if you were expecting other outcomes from your culture initiative, you’d naturally want aligned methods.   You could just be concerned with employee engagement, but I’m somewhat inclined (and willing to be wrong) to think about what the outcome of increased engagement would be.  It could also be retention or recruitment, if those are your goals.

These latter – engagement, recruitment, retention – are also possible indirect measures.  They indicate that things are better. Another indirect but more targeted measure might be the amount of collaboration happening (e.g. the use of collaboration tools) or even activity in social networks.  Those have been touted as the benefits of building community in social media, and those are worthwhile as well.

As a process, I think about what I might do before, during, and after any culture change initiative. I’d probably want a baseline to begin with, and then regular (if not continual) assessments as we go.  I’d take small steps, perhaps in one unit to begin, and monitor the impact, tuning as I go along.  Culture change is a journey, not an event, after all ;).
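
As a sketch of that baseline-then-monitor idea (again, thinking out loud; the numbers and the metric are hypothetical), you could track a direct measure such as problems solved per week and compare it against the baseline:

```python
from statistics import mean

# Hypothetical weekly counts of a direct measure (customer problems solved) for one pilot unit.
baseline_weeks = [12, 14, 11, 13]   # before the culture-change initiative
followup_weeks = [13, 15, 17, 16]   # after the initial steps

def percent_change(before, after):
    """Relative change in the mean of a measure, baseline vs. follow-up."""
    b, a = mean(before), mean(after)
    return 100.0 * (a - b) / b

print(f"Problems solved/week: {mean(baseline_weeks):.1f} -> {mean(followup_weeks):.1f} "
      f"({percent_change(baseline_weeks, followup_weeks):+.1f}%)")
```

The same pattern works for indirect measures (collaboration tool use, social network activity): establish the baseline first, then assess regularly and tune as you go.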


So ok, that’s off the top of my head, what say you?

26 October 2016

Pick my brain?

Clark @ 8:10 am

It's a continual bane of a consultant's existence that there are people who want to 'pick your brain'.  It's really asking for free consulting, and as such, it's insulting. If you google the phrase, you'll see how many people have indicated their problems with this! However, there are quite legitimate ways to pick my brain, and I thought I'd mention a couple.  In both cases, I think these were great engagements on both sides: high value for a reasonable investment.

Both, in this case, were folks who develop content: one a not-for-profit, the other in the higher-ed space.  One had heard me speak about learning design, and one had heard about a workshop I'd given, but both contacted me. It was clear they realized that there's value to them in having a scrutable learning design.

Content Review

So for the first one, they wanted some feedback on their design, and we arranged that I’d investigate a representative sample and provide feedback.  I went through systematically, taking notes, and compiled my observations into a report I sent them.  This didn’t take any investment in travel, but of course this feedback only points out what’s wrong, and doesn’t really provide mechanisms to improve.

I think they were surprised at the outcome, as the feedback was fairly robust.  They had a good design, largely, under the constraints, but there were some systematic design problems.  There were also some places where errors had slipped past editorial review (and this was only a small sample of a replicated model across a broad curriculum). To be fair, some of my complaints addressed choices that were appropriate given aspects of their context I hadn't known, but there was still a set of specific improvements I could recommend:

We found his comments insightful, and we look forward to implementing his expert suggestions to further improve our product…

Learning Design Workshop

In this case, they'd heard about a workshop that I'd run on behalf of a client, and were interested in getting a similar experience. They had been designing content and had a great ability to track the results of their design and tweak, but really wanted a grounding in the underlying learning science.  I did review some sample content, but I also traveled to their site for a day, presented learning science details, and workshopped the implications for their design process.

I went through details such as:

  • the importance of and format for objectives,
  • SME limitations and tips on how to work with them,
  • what makes effective practice,
  • the role and characteristics of concepts,
  • the details behind examples,
  • the introduction and the role of emotions in the learning experience,
  • and more.

We went through examples of their content, and workshopped how they could adjust their design processes in pragmatic ways to instill the important details into their approach.  We also talked about ways to follow up so as not to lose momentum, but it was clear that this first visit was viewed favorably:

“…a walking encyclopedia of learning science… was able to respond to our inquiries with one well-researched perspective after another”.

So, there are ways to pick my brain that provide high value and mutual benefit.  Sure, you can read my blog or books, but sometimes you may want assistance in contextualizing it to your situation.  I encourage you to think of making an investment in quality.  These examples are about learning design, but I have some in strategy that I intend to share soon.  And more.  Stay tuned for more 'adventures in consulting' tales that talk about ways in which a variety of needs are met.  Maybe one will resonate with you.  Of course, they'll be mixed in with the regular reflections you've come to expect.

19 October 2016

Self-regulation & PKM

Clark @ 8:05 am

I’m a fan of Harold Jarche’s Seek-Sense-Share (SSS) model for Personal Knowledge Mastery (PKM). I was also reading about self-regulated learning, and a proposed model for that. And I realized they could be related. Naturally, I created a diagram.

To start with, Harold's model is oriented around coping with the information flow as a component of learning. He starts with seek, where information comes either from a pre-arranged feed or the result of a specific search.  Then the information is processed, by representation, active experimentation, or both. Finally, information is shared, either broadcast through some form of post, or sent to a specific target. Note that the interpretations within the SSS boxes, e.g. feed and post, are mine, as I haven't checked them with him.

Now, the model of self-regulated learning I was reading about talks about personal goals, learning actions, and evaluation.  It seems to me that learning goals sit outside of SSS, the SSS serves as the actions, and then evaluation comes after the action. Specifically, the goals inform the choice of feeds and any search, as well as the context for interpretation. Similarly, the results of personal sensing and the feedback from sharing inform the evaluation. And of course, the evaluation feeds new goals.
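
As an illustration only (my rendering of the combined model, not Harold's), the relationship can be sketched as a loop in which goals drive Seek-Sense-Share and evaluation feeds new goals. All the function bodies here are placeholders:

```python
def seek(goal):
    """Gather items, from pre-arranged feeds or a goal-directed search."""
    return [f"article relevant to {goal}"]              # placeholder results

def sense(items, goal):
    """Process the items: represent them (notes, diagrams) and/or experiment."""
    return [f"interpretation of '{item}'" for item in items]

def share(interpretations):
    """Broadcast (e.g., a post) or send to a specific target; feedback comes back."""
    return [f"feedback on '{x}'" for x in interpretations]

def evaluate(goal, interpretations, feedback):
    """Judge progress against the goal; the result informs the next goal."""
    return goal  # placeholder: a real evaluation would refine the goal

# Continual review: goals drive Seek-Sense-Share, and evaluation feeds new goals.
goal = "understand content strategy"
for _ in range(3):
    items = seek(goal)
    interpretations = sense(items, goal)
    feedback = share(interpretations)
    goal = evaluate(goal, interpretations, feedback)
```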

Two additional things. First, the encompassing notion is that this is under continual review.  That is, you're taking time to think about how you set goals, act (SSS), and evaluate.  Also, let me note that I think this makes sense at both the individual and organizational level. That is, organizations need to be explicit about their knowledge, experiments, and learnings.

The outside loop is likely to be an implicit part of PKM as well, but as indicated I haven’t had a chance to discuss it with Harold.  However, it’s useful for me to represent it this way as an experiment (see what I did there?). The question is, does this make sense for you?

13 October 2016

Infrastructure and integration

Clark @ 8:04 am

When I wrote the L&D Revolution book, I created a chart that documented the different stages L&D could go through along the way.  Looking at it again, I see that I got (at least) one thing slightly off: I talked about content, but it's really about integration and infrastructure.  And I reckon I should share my thinking, then and now.

The premise of the chart was that there are stages of maturity across the major categories of areas L&D should be aware of.  The categories were Culture, Formal Learning, Performance Support, eCommunity, Metrics, and Infrastructure. For each of those, I had two subcategories, and I mapped each at four stages of maturity.

Let me be clear, these were made up. I stuck to consistency in having two sub-areas, and mapping to four stages of maturity.  I don't think I was wrong, but this was an effort to raise awareness rather than be definitive. That said, I believed then, and still do now, that the chart I created was roughly right.  With one caveat.

In the area of infrastructure, I focused largely on two subcategories: content models and semantics. I've been big on the ways that content could be used, from early work I did on content models that led to flexible delivery in an adaptive learning system, a context-sensitive performance support system, and a flexible content publishing system. I've subsequently written about content in a variety of places, attended an intelligent content conference, and have been generally advocating that it's time to do content like the big boys (read: web marketers).  And I think these areas are necessary, but not sufficient.

I realize, as I review the chart for my upcoming strategy workshop at DevLearn, that I focused too narrowly.  Infrastructure is really about the technical sophistication (which includes content models & semantics, but also tracking and analytics) and the integration of elements to create a true ecosystem.  So there's more to the picture than just the content models and semantics.  Really, we want to be moving on both the sophistication of the model and its technical underpinnings.
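
To make the tracking side concrete, here's a minimal sketch of recording content usage as an xAPI-style statement (actor, verb, object); the actor, activity ID, and names are hypothetical placeholders:

```python
import json
from datetime import datetime, timezone

# A minimal xAPI-style statement. The mailbox, activity ID, and names
# below are hypothetical placeholders.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/content/pricing-overview",
        "definition": {"name": {"en-US": "Pricing model overview"}},
    },
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# In practice this would be POSTed to a Learning Record Store (LRS);
# here we just show the payload that analytics would aggregate.
print(json.dumps(statement, indent=2))
```

Statements like this, accumulated against well-modeled content, are what make the later analytics, adaptivity, and contextualization possible.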

We'll be discussing this more in Las Vegas in November. And if you're interested in beginning to offer a richer picture of learning and to move L&D to be a strategic contributor to the organization, this is the chance for a jump-start!

