Learnlets

Clark Quinn’s Learnings about Learning

Reflections on 15 years

31 December 2014 by Clark 2 Comments

For Inside Learning & Technologies’ 50th edition, a number of us were asked to provide reflections on what has changed over the past 15 years. This was pretty much the period in which I’d returned to the US, joined what was effectively a startup, and began my life as a consultant. As an end-of-year piece, I have permission to post that article here:

15 years ago, I had just taken a step away from academia and government-sponsored initiatives to a new position leading a team in what was effectively a startup. I was excited about the prospect of taking the latest learning science to the needs of the corporate world. My thoughts were along the lines of “here, where we have money for meaningful initiatives, surely we can do something spectacular”. And it turns out that the answer is both yes and no.

The technology we had then was pretty powerful, and that has only increased in the past 15 years. We had software that let us leverage the power of the internet, and reasonable processing power in our computers. The Palm Pilot had already made mobile a possibility as well. So the technology was no longer a barrier, even then.

And what amazing developments we have seen! The ability to create rendered worlds accessible through a dedicated application and now just a browser is truly an impressive capability. Regardless of whether we overestimated the value proposition, it is still quite the technology feat. And similarly, the ability to communicate via voice and video allows us to connect people in ways once only dreamed of.

We also have rich new ways to interact from microblogs to wikis (collaborative documents). These capabilities are improved by transcending proximity and synchronicity. We can work together without worrying about where the solution is hosted, or where our colleagues are located. Social media allow us to tap into the power of people working together.

The improvements in mobile capabilities are also worth noting. We have gone from hype to hyphens, where a limited monochrome handheld has given way to powerful high-resolution full-color multi-channel always-connected sensor-rich devices. We can deliver pretty much anything anywhere we want, which fulfills Arthur C. Clarke’s famous proposition that any sufficiently advanced technology is indistinguishable from magic.

Coupled with our technological improvements are advances in our understanding of how we think, work, and learn. We now have recognition about how we act in the world, about how we work with others, and how we best learn. We have information age understandings that illustrate why industrial age methods are not appropriate.

It is not truly new, but the recognition that the model of our thinking as formal and logical needs updating has reached mainstream awareness in the last decade or so. While we can work in such ways, it is the exception rather than the rule. Such thinking is effortful, and it turns out both that we avoid it and that there is a limit to how much deep thinking one can do in a day. Instead, we use our intuition beyond where we should, and while this is generally okay, it helps to understand our limitations and design around them.

There is also a spreading awareness of how much our thinking is externalized in the world, and how much we use technology to support us being effective. We have recognized the power of external support for thinking, through tools such as checklists and wizards. We do this pretty naturally, and the benefits from good design of technology greatly facilitate our ability to think.

There is also recognition that the model of individual innovation is broken, and that working together is far superior to working alone. The notion of the lone genius disappearing and coming back with the answer has been replaced by teams iterating on top of previous work. When people work together in effective ways, in a supportive environment, the outcomes will be better. While this is not easy to effect in many circumstances, we know the practices and culture elements we need; the barrier is our commitment to getting there, not our understanding.

Finally, our approaches to learning are better informed now. We know that emotional engagement is a valued component in moving to learning experience design. We understand the role of models in supporting more flexible performance. We also have evidence of the value of performing in context. It is not news that information dump and knowledge test do not lead to meaningful skill acquisition, and it is increasingly clear that meaningful practice can. It is also increasingly clear that, as things move faster, meaningful skills – the ability to make better decisions – are what will provide the sustainable differentiator for organizations.

So imagine my dismay in finding that the approaches we are using in organizations are still largely rooted in yesteryear. While we have had rich technology opportunities to combine with our enlightened understanding, that is not what we are seeing. What we see, still, is the expectation that learning is in-the-head and top-down, with information dump and meaningless assessment not tied to organizational outcomes. And while it is demonstrably not working, there seems little impetus to change.

Truly, there has been little change in our underlying models in 15 years. While the technology is flashier, the buzz words have mutated, and some of the faces have changed, we are still following myths like learning styles and generational differences, we are still using ‘spray and pray’ methods in learning, we are still not taking on performance support and social learning, and perhaps most distressingly, we are still not measuring what matters.

Sure, the reasons are complex. There are lots of examples of the old approaches, the tools and practices are aligned with bad learning practices, the shared metrics reflect efficiency instead of effectiveness, … the list goes on. Yet a learning & development (L&D) unit unengaged with the business units it supports is not sustainable, and consequently the lack of change is unjustifiable.

And the need is greater now than ever. The rate of change is increasing, and organizations now need not just to be effective, but to become agile. There is no longer time to plan, prepare, and execute; the need is to continually adapt. Organizations need to learn faster than the competition.

The opportunities are big. The critical component for organizations to thrive is to couple optimal execution (the result of training and performance support) with continual innovation (which does not come from training). Imagine an L&D unit that works with business units to drive interventions that affect key performance indicators. Consider an L&D unit responsible for facilitating the interactions that lead to new solutions, new products and services, and better relationships with customers. That is the L&D we need to see!

The path forward is not easy, but it is systematic and doable. A vision of a ‘performance ecosystem’ – a rich suite of tools that surround performers, support their success, and are aligned with how they think, work, and learn – provides an endpoint to steer towards. Every organization’s path will be different, but a good beginning is to do formal learning right, look at performance support, and start work on the social media infrastructure.

An associated focus is building a meaningful infrastructure (hint: one all-singing, all-dancing LMS is not the answer). A strategy to get there is a companion effort. And, ultimately, a learning culture will be necessary. These are not just the components L&D needs; they are the components of a successful organization, one agile enough to adapt to the increasing rate of change we are facing.

And here is the first step: L&D has to become a learning organization. Mantras like ‘work out loud’, ‘fail fast’, and ‘reflect’ have to become part of the L&D culture. L&D has to start experimenting and learning from the experiments. Let us ensure that the past 15 years are a hibernation we emerge from, not the beginning of the end.

Here’s to change for the better.  May 2015 be the  best year yet!

Types of meaningful processing

14 October 2014 by Clark 1 Comment

In a previous post, I argued for different types and ratios of worthwhile learning activities. I’ve been thinking about (and working on) this quite a bit lately. I know there are other resources that I should know about (pointers welcome), but I’m currently wrestling with several types of situations and wanted to share my thinking. This is aside from scenarios/simulations (e.g. games), which are the first, best learning practice you can engage in, of course. What I’m looking for are ways to get learners to do processing that will assist their ability to do. This isn’t recitation, but application.

So one situation is where the learner has to execute the right procedure. This seems easy, but learners are liable to get it right in practice and still get it wrong in real situations. An idea I had heard of before, but which was reiterated through Socratic Arts (Roger Schank & cohorts), is to have learners observe someone performing the procedure (e.g. on video) and identify whether it was done right or not. For many routine but important tasks (e.g. sanitation), this is a more challenging task than just doing it right. It has learners monitor the process, and then they can turn that on themselves to become self-monitoring. If the selection of mistakes is broad enough, they’ll have experience that will transfer to their whole performance.

Another task that I faced earlier was the situation where people had to interpret guidelines to make a decision. Typically, the extreme cases  are obvious, and instructors argue that they all are, but in reality there are many ambiguous situations.  Here, as I’ve argued before, the thing to do is have folks work in groups and be presented with increasingly ambiguous situations. What emerges from the discussion is usually a rich unpacking of the elements.  This processing of the rules in context exposes the underlying issues in important ways.

Another type of task is helping people learn to apply models to make decisions. Rather than present them with the models, I’m again looking for more meaningful processing. Eventually I’ll expect learners to make decisions with the models, but as a scaffolding step, I’m asking them to interpret the models in terms of their recommendations for use. So before I have them engage in scenarios, I’ll ask them to use the models to create, say, a guide to how to use that information: to diagnose, to remedy, to put in place initial protections. At other times, I’ll have them derive subsequent processes from the theoretical model.

One other example I recall came from a paper that Tom Reeves wrote (and I can’t find) in which learners picked from a number of options that indicated problems or actions to take. The interesting difference was that each choice was followed by a question about why: every choice had two stages, decision and then rationale. This is a very clever way to see whether they’re not just getting the right answer but understand why it’s right. I wonder if any of the authoring tools on the market right now include such a template!
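
That two-stage structure is easy to prototype even without a dedicated authoring template. Here’s a minimal sketch in Python; the item content and scoring labels are invented purely for illustration:

```python
# Hypothetical sketch of a two-stage item: learners first pick a decision,
# then pick the rationale behind it. Both stages must be right for full credit,
# which distinguishes understanding from a lucky right answer.

from dataclasses import dataclass

@dataclass
class TwoStageItem:
    prompt: str
    actions: list           # candidate decisions
    correct_action: int     # index of the right decision
    rationales: list        # candidate explanations of why
    correct_rationale: int  # index of the right explanation

    def score(self, action_choice: int, rationale_choice: int) -> str:
        if action_choice != self.correct_action:
            return "wrong decision"
        if rationale_choice != self.correct_rationale:
            return "right decision, wrong reason"
        return "right decision, right reason"

item = TwoStageItem(
    prompt="The patient reports sudden dizziness. What do you do first?",
    actions=["Check blood pressure", "Administer oxygen", "Call a code"],
    correct_action=0,
    rationales=["Rule out hypotension before escalating",
                "Dizziness always indicates hypoxia",
                "All sudden symptoms warrant a code"],
    correct_rationale=0,
)
print(item.score(0, 1))  # right decision, wrong reason
```

The interesting feedback case is the middle one: a learner who picks the right action for the wrong reason needs remediation on the model, not the procedure.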

I know there are  more categories of learning and associated tasks that require useful processing (towards do, not  know, mind you ;), but here are a couple that are ‘top of mind’ right now. Thoughts?

Learning in 2024 #LRN2024

17 September 2014 by Clark 1 Comment

The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now. While I couldn’t participate in the twitter chat they held, I optimistically weighed in: “learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network”. However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag. The twitter chat had a series of questions, so I’ll address them here (with a caveat: our learning itself really hasn’t changed, since our wetware hasn’t evolved in the past decade and won’t in the next; it is our support of learning I’m referring to here):

1. How has learning changed in the last 10 years (from the perspective of the learner)?

I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events. And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google or social networks like Facebook and LinkedIn. And I expect they’re seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they’re seeing is not particularly good, nor improving, if not actually decreasing in quality. I expect they’re seeing more info dump/knowledge test, more and more ‘click to learn more’, more tarted-up drill-and-kill. For which we should apologize!

2.  What is the most significant change technology has made to organizational learning in the past decade?

I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound: the ability to track more activity, mine more data, and gain more insights. The Experience API (xAPI) coupled with analytics is a huge opportunity. The other is the rise of social networks. The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations. Working ‘out loud’, showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way, away from the ‘event’ model.
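
What makes xAPI suited to tracking activity beyond the course is that each record is just an actor/verb/object statement. A minimal sketch in Python follows; the actor and activity identifiers are made-up placeholders, though the verb id is from ADL’s standard vocabulary:

```python
import json

# A minimal xAPI ("Tin Can") statement. Because activity is recorded as
# actor/verb/object rather than as course completions, the same format can
# capture workflow-level and social activity, which is what later analytics
# can mine for insight.
statement = {
    "actor": {
        "name": "Pat Learner",
        "mbox": "mailto:pat@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/sanitation-checklist",
        "definition": {"name": {"en-US": "Sanitation checklist walkthrough"}},
    },
}

print(json.dumps(statement, indent=2))
```

A Learning Record Store would accept statements like this from many sources, which is exactly the shift away from the LMS-centric ‘event’ model.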

3.  What are the most significant challenges facing organizational learning today?

The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes. This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact; the list goes on. We’ve deluded ourselves that an LMS and a rapid elearning tool mean you’re doing something worthwhile, when it’s profoundly wrong. L&D needs a revolution.

4.  What technologies will have the greatest impact on learning in the next decade? Why?

The short answer is mobile. Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (c.f. virtual worlds). Mobile has resisted that pattern for the simple reason that the value proposition is so strong. The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it’s not courses! Mobile will naturally incorporate augmented reality with the variety of new devices we’re seeing, and be contextualized as well. We’re seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization. As above, add new tracking and analysis tools, and social networks. I’ll add that simulations/serious games are an opportunity that has yet to really be capitalized on. (There are reasons I wrote those books :)

5.  What new skills will professionals need to develop to support learning in the future?

As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation. We need to not design courses until we’ve ascertained that no other approach will work, so we need to get down to the real problems. We should hope the answer comes from the network when it can, design performance support solutions when it can’t, and reserve courses for only what absolutely has to be in the head. Getting good outcomes from the network takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills. So the ability to get at the root causes of problems, choose between solutions, and measure the impact is key for the first part; understanding what skills individuals need (whether performers or mentors/coaches/leaders) and how to develop them is the key new addition for the second.

6.  What will learning look like in the year 2024?

Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way.  Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for  in the most efficacious way.

In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully, they’re riding or catalyzing the needed change, but in a cynical mood I might believe that things won’t change nearly as much as I’d hope. I also remember a talk (cleverly titled: Predict Anything but the Future :) which argued that the future tends to arrive much as an informed observer would predict, but with an unexpected twist. So it’ll be interesting to discover what that twist will be.

Learning Engineering

10 September 2014 by Clark Leave a Comment

Last week I had the opportunity to attend the inaugural meeting of the Global Learning Council. While not really global in either sense (little representation from overseas or from segments other than higher ed), it was a chance to refresh myself on some rigor around learning sciences. And one thing that struck me was folks talking about learning engineering.

If we take the analogy from regular science and engineering, we are talking about taking the research from the learning sciences, and applying it to the design of solutions.  And this sounds like a good thing, with some caveats.  When talking about the Serious eLearning Manifesto, for example, we’re talking about principles that should be embedded in  your learning design approach.

While the intention was not to provide coverage of learning science, several points emerged at one point or another as research-based outcomes to be desired. For one, the value of models in learning.  Another was, of course, the value of spacing practice. The list goes on.  The focus of the engineering, however, is different.

While it wasn’t an explicit topic of the talks, it emerged in several side conversations: the focus is on design processes and tools that increase the likelihood of creating effective learning practices. This includes doing a suitable job of creating aligned outcomes through processes of working with SMEs, identifying misconceptions to be addressed, ensuring activities are designed that have learners appropriately processing and applying information, an appropriate spread of examples, and more.

Of course, developing a rigorous course for any topic is a thorough exercise, which is desirable but not always pragmatic. While the full rigor of science would go as far as adaptive intelligent tutoring systems, the amount of work to build them can be prohibitive under practical constraints. It takes high importance and a large potential audience to do this for other than research purposes.

In other cases, we use heuristics, and sometimes we go too far: just dumping information and adding a quiz is often seen, though that has little likelihood of having any impact. Even if we do create appropriate practice, we might only have learners practice until they get it right, not until they can’t get it wrong.
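
The difference between ‘until they get it right’ and ‘until they can’t get it wrong’ can be made concrete as a mastery criterion. A toy sketch, where the required streak length is an arbitrary choice for illustration:

```python
# Toy sketch of a mastery rule: stop practice not at the first correct
# answer, but only after a streak of consecutive correct answers.
# The required streak of 3 is an arbitrary illustrative threshold.

def practiced_to_mastery(results, required_streak=3):
    """results: sequence of booleans, one per practice attempt, in order."""
    streak = 0
    for correct in results:
        streak = streak + 1 if correct else 0
        if streak >= required_streak:
            return True
    return False

print(practiced_to_mastery([True]))                     # False: one lucky hit isn't mastery
print(practiced_to_mastery([False, True, True, True]))  # True: sustained success
```

The point of the rule is that a single correct attempt ends most quiz-driven elearning, whereas a streak criterion keeps practice going until performance is reliable.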

Finding the balance point is an ongoing effort. I reckon that the elements of good design are a starting point, but you need processes that are manageable, repeatable, and scalable. You need structures to help, including representations that support identifying key elements and make it difficult to ignore the important ones. Ideally, you have aligned tools that make it easy to do the right things.

And if this is what Learning Engineering can be, systematically applying learning science to design, I reckon there’s also a study of learning science engineering: aligning not just the learning, but the design process, with how we think, work, and learn. And maybe then there’s a learning architecture as well. Just as an architect designs the basic look and feel of the halls and rooms while the engineers build them, a learning architect would design the curriculum approach and the pedagogy, with the learning engineers following through on those principles in developing courses.

Is learning engineering an alternative to instructional design? I’m wondering if the focus on engineering rather than design (applied science rather than art), and learning rather than instruction (outcomes, not process), is a better characterization. What do you think?

Resources before courses

3 July 2014 by Clark Leave a Comment

In the course of answering a question in an interview, I realized a third quip to complement two recent ones. The earliest one (not including my earlier ‘Quips’) was “curation trumps creation”, about how you shouldn’t spend the effort to create new resources if you’ve already got them. The second one was “from the network, not your work”, about how if your network can provide the answer, you should let it. So what’s this new one?

While I’ve previously argued that good learning design shouldn’t take longer, that was assuming real design in the first place: that you did an analysis, designed and presented concepts and examples, and created practice, not just dumped a quiz on top of content. Doing real design, good or bad, takes time. And if it’s about knowledge, not skills, a course doesn’t make sense. In short, courses should be reserved for when they are really needed.

Too often, we’re making courses to try to get knowledge into people’s heads, which usually isn’t a good idea, since our brains aren’t good at remembering rote information. There are times when it’s necessary (e.g. medical vocabulary), but rarely; we resort to that solution too often because course tools are our only hammer. And it’s wrong.

We should be trying to put information in the world, and reserve the hard work of course building for when it’s proprietary skill sets we’re developing. If someone else has done it, don’t feel like you have to use your resources to do it again; use your resources to meet other needs: more performance support, or facilitating cooperation and communication.

So, for both principled and pragmatic reasons, you should be looking to resources as a solution before you turn to courses. On principle, they meet different needs, and you shouldn’t use the course when (most) needs can be met with resources. Pragmatically, it’s a more effective use of  your  resources: staff, time, and money.

#itashare

Changing Culture: Changing the Game

13 June 2014 by Clark Leave a Comment

I previously wrote about Sutton & Rao’s Scaling up Excellence, and have now finished a quick read of Connors & Smith’s  Change the Culture, Change the Game.  Both books cover roughly the same area, but in very different ways.  Sutton & Rao’s was very descriptive of the changes they observed and the emergent lessons.  Connors & Smith, on the other hand, are very prescriptive.  Yet both are telling similar stories with considerable overlap.

Let’s be clear, Connors & Smith have a model they want to sell you.  You get the model up front, and then implementation tools in the second half. Of course, you  aren’t supposed to actually try this without having their help.  As long as you’re clear on this aspect of the book, you can take the lessons learned and decide whether you’d apply them yourself or use their support.

They have a relatively clear model, that talks about the results you want, the actions people will have to take to get to the results,  the beliefs that are needed to guide those actions, and the experiences that will support those beliefs. They aptly point out that many change initiatives stop at the second step, and don’t get the necessity of the subsequent two steps. It’s a plausible story and model, where  the actions, beliefs, and experiences are the elements that create the culture that achieves the results.

Like Kirkpatrick’s levels, the notion is that you start with the results you need and work backward. Further, everything has to be aligned: you have to determine what actions will achieve the new results, then what new beliefs can guide those new actions, and ultimately what experiences are needed to foster those new beliefs. You work rigorously to focus only on the ones that will make a difference, recognizing that trying to change too much at once will undermine the outcome.

The second half talks about tools to foster these steps. There are management tools,  leadership  skills, and  integration steps.  There’s necessary training associated with these, and then coaching (this is the sales bit).   It’s very formulaic, and makes it sound like close adherence to these approaches will lead to success.  That said, there is a clear recognition that you need to continually check on how it’s going, and be active in making things happen.

And this is where there’s overlap with Sutton & Rao: it’s about ongoing effort, it requires accountability (being willing to take ownership of outcomes), people must be engaged and involved, etc. Both are different approaches to dealing with the same issue: working systematically to make necessary changes in an organization. And in both cases, the arguments are pretty compelling that it takes transparency and commitment by the leadership to walk the talk. It’s up to the executives to choose the needed change, but the empowerment to find ways to make that happen is diffused downward.

Whether you like the more organic approach of Sutton & Rao or the more formulaic model of Connors & Smith, you will find insight into the elements that facilitate change. For me, the synergy was nice to see. Now we’ll see whether both seem old-school by comparison to Laloux’s Reinventing Organizations, which has received strong support from some colleagues I have learned to trust.

#itashare

Vale Don Kirkpatrick

28 May 2014 by Clark 2 Comments

Last week, Don Kirkpatrick passed away. Known for his four ‘levels’ of measuring learning, he’s been both hailed and excoriated. And it’s instructive to see why on both sides.

He derived his model as an approach to determine the impact of an intervention on organizational performance. He felt that you worked backward from the change you needed: determine whether workplace performance was changing, then see whether that could be attributed to the training, and ultimately to the learner. He numbered his steps so that 1 was seeing what learners thought, 2 was that learners could demonstrate a change, 3 was that the change was showing up in the workplace post-intervention, and 4 was that it was impacting business measures.

This actually made a lot of sense. Rather than measuring the cost per hour of seat time or some other measure of efficiency, or, worse, not measuring at all, here was a plan designed to focus on the meaningful change the business needed. It was obvious, and yet also obviously needed. So his success in bringing awareness to the topic of business impact is to be lauded.

There were two major problems, however. For one, having numbered it the way he did, people assumed they could make a partial attempt. In practice, many would only do level 1 or 2, and these are useless without ultimately including 4. He even later wondered if he should have numbered the approach in reverse. The numbers have been documented (from a presentation with results from the ASTD Benchmarking Forum) as dropping from 94% doing level 1 to 34% doing level 2, 13% doing level 3, and 3% doing level 4. That’s not the idea!

The second problem was that, whether or not he intended it (and there are reasons to believe he didn’t), it became associated only with training interventions. Performance support interventions or social network outcomes could similarly be measured (at least at levels 3 and 4), yet the language was all about training, which made it easy for folks to wrongly conclude that training was your only tool. And we still see folks using courses as the only tool in their repertoire, which just isn’t aligned with how we think, work, and learn (hence the revolution).

Kirkpatrick rode this tool for the rest of his career, created a family business in it, and wasn’t shy about suggesting that you buy a book to learn about it. I certainly can’t fault him, as he did have a sensible model and it could be put to effective use. There are worse ways to earn a living.

Others have played upon his model. The Phillipses have made a similar career with their fifth level, ROI, measuring the cost of impacting level 4 against the value of the impact. Which isn’t a bad move to make after you focus on making an impact. Similarly, a client opined that there was also a level 0: are the learners even showing up for the training?

In assessing the impact, part of me is mindful that tools can be used for good or ill. PowerPoint doesn’t kill people, people do, as the saying goes. Still, Kirkpatrick could’ve renumbered the steps, or been more outspoken about the problems with doing just step 1.

So, I laud his insight, and bemoan the ultimate lack of impact.  However, I reckon it’s better to argue about it than be ignorant.  Rest in peace.

Getting contextual

21 May 2014 by Clark Leave a Comment

For the current ADL webinar series on mobile, I gave a presentation on contextualizing mobile in the larger picture of L&D (a natural extension of my most recent books).  And a question came up about whether I thought wearables constituted mobile.  Naturally my answer was yes, but I realized there’s a larger issue, one that gets meta as well as mobile.

So, I’ve argued that we should be looking at models to guide our behavior: that we should create them by abstracting from successful practices, by conceptualizing them, or by adopting them from other areas. A good model, with rich conceptual relationships, provides a basis for explaining what has happened and predicting what will happen, giving us a basis for making decisions. Which means models need to be as context-independent as possible.

So, for instance, when I developed the mobile models I use, e.g. the 4C’s and the applications of learning (see figure), I deliberately tried to create an understanding that would transcend the rapid changes characterizing mobile, and make the models appropriately recontextualizable.

In the case of mobile, one of the unique opportunities is contextualization.  That means using information about where you are, when you are, which way you’re looking, temperature or barometric pressure, or even your own state: blood pressure, blood sugar, galvanic skin response, or whatever else skin sensors can detect.

To put that into context (see what I did there): with desktop learning, augmenting formal learning could be emails that provide new examples or practice, spread out over time. With a smartphone you can do the same, but you could also have localized information, so that because of where you were you might get information related to a learning goal. With a wearable, you might get some information because of what you're looking at (e.g. a translation, or a connection to something else you know), or due to your state (too anxious? stop and wait 'til you calm down).

Similarly for performance support: with a smartphone you could take what comes through the camera and overlay information on what shows on the screen; with glasses you could lay it on the visual field.  With a watch or a ring, you might have an audio narration.  And we've already seen how the accelerometers in fitness bracelets can track your activity and put it in context for you.

Social can not only connect you to who you need to know, regardless of device or channel, but also signal you that someone’s near, detecting their face or voice, and clue you in that you’ve met this person before.  Or find someone that you should meet because you’re nearby.
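The pattern across these examples is a mapping from contextual signals to a support action. A purely hypothetical sketch of that rule-based mapping, with signal names and thresholds invented for illustration:

```python
def suggest_support(context: dict) -> str:
    """Map invented contextual signals to a support action (illustrative only)."""
    if context.get("anxiety", 0) > 0.8:          # wearable state sensor
        return "pause: wait until you calm down"
    if context.get("near_learning_site"):        # location signal
        return "push location-related learning content"
    if context.get("known_face_nearby"):         # social/recognition signal
        return "remind: you've met this person before"
    return "no contextual augmentation"

print(suggest_support({"near_learning_site": True}))
```

A real system would obviously be richer than a rule list, but the sketch shows the core idea: the device supplies context, and the design maps context to need.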

All of the above use contextual information to augment the other tasks you're doing.  The point is that you map the technology to the need, and infer the possibilities.  Models are a better basis for elearning, too, so that you teach transferable understandings (made concrete in practice) rather than specifics that can get outdated.  This is one of the elements we placed in the Serious eLearning Manifesto, of course.  They're useful for coaching & mentoring as well, as for problem-solving, innovating, and more.

Models are powerful tools for thinking, and good ones will support the broadest possible uses.  And that’s why I collect them, think in terms of them, create them, and most importantly, use them in my work.   I encourage you to ensure that you’re using models appropriately to guide you to new opportunities, solutions, and success.

Peeling the onion

15 May 2014 by Clark 2 Comments

I've been talking a bit recently about deepening formal design, specifically to achieve learning that's flexible, persistent, and develops the learner's abilities to become self-sustaining in work and life.  That is, not just for a course, but for a curriculum.  And it's more than just what we talked about in the Serious eLearning Manifesto, though of course it starts there.  So, to begin with, it needs to start with meaningful objectives, provide related practice, and be trialed and developed, but there's more: there are layers of development that wrap around the core.

One element I want to suggest is important is also in the Manifesto, but I want to push a bit deeper here.  I worked to put in that the elements behind, say, a procedure or a task that you apply to problems are models or concepts.  That is, a connected body of conceptual relationships that tie together your beliefs about why it should be done this way.  For example, if you've a procedure or process you want people to follow, there is (or should be) a rationale behind it.

And you should help learners discover and see the relationships between the model and the steps, through examples and the feedback they get on practice.  If they can internalize the understanding behind the steps, they are better prepared for the inevitable changes to the tools they use, the materials they work on, or the process changes that will come from innovation.  Training them on X, when X will ultimately shift to Y, isn't as helpful unless you help them understand the principles that led to performance on X and will transfer to Y.

Another element is that the output of the activities should create scrutable deliverables and also annotate the thoughts behind the result.  These provide evidence of the thinking, both implicit and explicit: a basis for mentors/instructors to understand what's good, and what still may need to be addressed, in the learner's thinking.  There's also the creation of a portfolio of work which belongs to the learner and can represent what they are capable of.

Of course, the choice of activities for the learner initially, and designing them to be engaging, by being meaningful to the learner in important ways, is another layer of sophistication in the design.  It can't just be the traditional boring problems; instead, the challenges need to be contextualized. More than that (which is already in the Manifesto), you want to use exaggeration and story to really make the challenges compelling.  Learning should be hard fun.

Another layer is that of 21st Century skills (for example, the SCANS competencies).  These can't be taught separately; they really need to manifest across whatever domain learning you are doing. So you need learners to not just learn concepts, but apply those concepts to specific problems. And, in the requirements of the problem, you build in opportunities to problem-solve, communicate, collaborate, i.e. all the foundational and workplace skills. They need to reappear again and again, and be assessed (and developed) separately.

Ultimately, you want the learner to be taking on responsibility themselves.  Later assignments should include the learner being given parameters and choosing appropriate deliverables and formats for communication.  And this requires an additional layer, a layer of annotation on the learning design. The learners need to be seeing why the learning was so designed, so that they can internalize the principles of good design and so become self-improving learners. You, for example, in reading this far, have chosen to do this as part of your own learning, and hopefully it's a worthwhile investment.  That's the point; you want learners to continue to seek out challenges, and resources to succeed, as part of their ongoing self-development, and that comes by having seen learning design and been handed the keys at some point on the journey, with support that's gradually faded.

The nuances of this are not trivial, but I want to suggest that they  are doable.  It’s a subtle interweaving, to be sure, but once you’ve got your mind around it (with scaffolded practice :), my claim is that it can be done, reliably and repeatedly.   And it should.  To do less is to miss some of the necessary elements for successful support of  an individual to become the capable and continually self-improving learner that we need.

I touched on most of this when I was talking about Activity-Based Learning, but it’s worthwhile to revisit it (at least for me :).

Facilitating Innovation

13 May 2014 by Clark 4 Comments

One of the things that emerged at the recent A(S)TD conference was that a particular gap might exist. While there are resources about learning design, performance support design, social networking, and more, there’s less guidance about facilitating innovation.  Which led me to think a wee bit about what might be involved.  Here’s a first take.

So, first, what are the elements of innovation?  Well, whether you listen to Steven Berlin Johnson on the story of innovation, or Keith Sawyer on ways to foster innovation, you'll see that innovation isn't individual.  In previous work, I looked at models of innovation, and found that either you mutate an existing design, or meld two designs together.  Regardless, it comes from working and playing well together.

The research suggests that you  need to make sure you are addressing the right problem, diverge on possible solutions via diverse teams under good process, create interim representations, test, refine, repeat.  The point being that the right folks need to work together over time.

The barriers are several.  For one, you need to get the cultural elements right: welcoming diversity, openness to new ideas, safety in contributing, and time for reflection.  Without the complementary inputs, and without getting everyone to contribute, the likelihood of the best outcome is diminished.

You also shouldn't take for granted that everyone knows how to work and play well together.  Someone may not be able to ask for help in effective ways, or, perhaps more likely, others may offer input in ways that minimize the likelihood that it'll be considered.  People may not use the right tools for the job, either not being aware of the full range (I see this all the time), or just having different ways of working. And folks may not know how to conduct brainstorming and problem-solving processes effectively (I see this as well).

So, the facilitation role has many opportunities to increase the quality of the outcome.  Helping establish the culture, first of all, is really important.  A second role would be to understand and promote the match of tools to need; this requires, by the way, staying on top of the available tools.  Being concrete about learning and problem-solving processes, educating people about them, and looking for situations that need facilitation, is another role.  Both starting up front, educating folks before these skills are needed, and then monitoring for opportunities to tune those skills, are valuable.  Finally, developing process facilitation skills, whether serving in that role yourself or developing the skills in others, or both, is critical.

Innovation isn’t an event, it’s a process, and it’s something that I want P&D (Learning & Development 2.0 :) to be supporting. The organization needs it, and who better?

#itashare
