Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

31 July 2009

Virtual Worlds #lrnchat

Clark @ 3:09 pm

In last night’s #lrnchat, the topic was virtual worlds (VWs).  This was largely because several of the organizers had recently attended one or another of the SRI/ADL meetings on the topic, but also because one of the organizers (@KoreenOlbrish) is majorly active in the business of virtual worlds for learning through her company Tandem Learning.  It was a lively session, as always.

The first question to be addressed was whether virtual worlds had been over- or underhyped.  The question isn’t one or the other, of course.  Some felt underhyped, as there’s great potential; others thought they’d been overhyped, as there’s lots of noise but few real examples.  Both are true, of course.  Everyone pretty much derided the presentation of PowerPoints in Second Life, however (and rightly so!).

The second question explored when and where virtual worlds make sense.  Others echoed my prevailing view that VWs are best for inherently 3D and social environments.  Some interesting nuances came in exploring the thought that 3D doesn’t have to be at our scale; we can do micro or macro 3D explorations as well, and not just across distance, but also time. Imagine exploring a slowed-down, expanded version of a chemical reaction with an expert chemist!  Another good idea was contextualized role plays.  Have to agree with that one.

Barriers were explored, and of course value propositions and technical issues ruled the day. Making the case is one problem (a Forrester report was cited that says enterprises do not yet get VWs), and the technical (and cognitive) overhead is another.  I wasn’t the only one who mentioned standards.

Another interesting challenge was the lack of experience in designing learning in such environments.  It’s still early days, I’ll suggest, and a lot of what’s being done is reproductions of other activities in the new environment (the classic problem: initial uses of a new technology mirror the old technology).  I suggested that we have principles (what good learning is and what VW affordances are) that should guide us to new applications without having to go through that ‘reproduction’ stage.

I should note that having principles does not preclude new opportunities coming from experimentation, and I laud such initiatives.  I’ve opined before that it’s an extension of the principles from Engaging Learning combined with social learning, both areas I’ve experience in, so I’m hoping to find a chance to really get into it, too.

The third question explored what lessons can be learned from social media to enhance appropriate adoption of VWs.  Comments included that they needed to be more accessible and reliable, that they’ll take nurturing, and that they’ll have to be affordable.

As always, the lrnchat was lively, fun, and informative.  If you haven’t tried one, I encourage you to at least take it for a trial run. It’s not for everyone, but some admitted to it being an addiction! ;)  You can find out more at the #lrnchat site.

For those who are interested in more about VWs, I want to mention that there will be a virtual world event here in Northern California September 23-24, the 3D Training, Learning, & Collaboration conference.  In addition to Koreen, people like Eilif Trondsen & Tony O’Driscoll (who has a forthcoming book with Karl Kapp on VW learning) will be speaking, and companies like IBM and ThinkBalm are represented, so it should be a good event. I hope to go (and pointing to it may make that happen, full disclosure :).  If you go, let me know!

30 July 2009

Making designing good learning easier

Clark @ 2:19 pm

On my last post, I got a comment that really made me think.  The problem was content coming as PPTs from SMEs, and the question was poignant: “Given limited time and resources on a project how can you plan in advance to ensure that your learning is engaging and creates effective outcomes?”  I commented a reply, but I’d like to elaborate on that.

I like the focus on the ‘planning’ part: what can you do up front to increase the quality of your learning outcomes?  It’s a recursive design problem: people need to be able to design better, so what training, job aids, tools, and/or social learning can we develop to make that happen?  Having just done this on a project where a team I was a member of was responsible for generating a whole curriculum around the domain, I can speak with some confidence about how to make this work.

First are the tools.  Too often, templates enforce rigor around having the elements, rather than around what makes those elements really work.  So, on the project, I guided not only the design of the templates, but also the definitions associated with the elements, which helped ensure they accomplished the necessary learning activities.  For example, it’s no good to have an introduction that doesn’t activate the relevant prior experience and knowledge, doesn’t help the learner comprehend why this learning is important, or accomplishes this in an aversive way (can you say: “pre-test”?  :).  This is the performance support component that helps make it easy to do things well and more difficult to do the wrong thing.  Similarly with ensuring meaningful activity in the first place, etc.
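To make that concrete, here’s a minimal sketch of what such template support might look like as code: a checker that flags not just missing elements but elements that don’t meet their quality criteria. The element names and criteria here are invented for illustration, not drawn from the actual project templates.

```python
# Hypothetical template checker: 'performance support for design'.
# Element names and quality criteria are invented for illustration.
REQUIRED_CHECKS = {
    "introduction": ["activates_prior_knowledge", "establishes_relevance"],
    "practice": ["meaningful_context", "feedback_included"],
}

def review(lesson):
    """Return a list of problems; an empty list means the draft passes."""
    problems = []
    for element, criteria in REQUIRED_CHECKS.items():
        if element not in lesson:
            problems.append(f"missing element: {element}")
            continue
        for criterion in criteria:
            # Presence isn't enough: each element must do its job.
            if not lesson[element].get(criterion, False):
                problems.append(f"{element}: fails '{criterion}'")
    return problems

draft = {"introduction": {"activates_prior_knowledge": True,
                          "establishes_relevance": False}}
print(review(draft))  # flags the weak intro and the missing practice element
```

The point of the design choice is that the template encodes the *function* of each element, not just its existence, making it harder to do the wrong thing.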

Next is the understanding.  This comes both from creating a shared understanding in the team and from refining the process, making the outcome a ‘habit’.  First, I’d worked with some of the team before, so they shared my design principles; then I presented and co-developed that understanding with the client.  Then, as first-draft content came out, I’d critique it and use that to tune the template, and the understanding amongst the content developers.

The involvement in refining the design process took some time, but really paid off: the quality of the resulting output took a steep increase and then stabilized as a good quality learning experience, yet one reproducible in a cost-effective, sustainable, and manageable way.

As I’ve mentioned before, the nuances between bad elearning and really effective and engaging content are subtle to the untrained eye, but the outcomes are not, both subjectively from the learner’s experience, and objectively from the outcomes.  You should be collecting both those metrics, and reviewing the outcomes, as they both provide useful information about how your design is working (or not) and how to improve it.

If it matters, and it should, you really should be reviewing and tuning your processes to achieve engagement and learning outcomes.  It’s not more expensive in the long term, though it does take more work.  But otherwise, it’s just a waste of money, and that is expensive!  You’ll end up in the situation Charles Jennings cites, where “you might as well throw the money spent on these activities out the window.”  Don’t waste money; spend the time assuring that your learning design processes achieve what they need to.  Your organization, and your learners, will thank you.

28 July 2009

Creating Stellar Learning

Clark @ 9:50 am

Getting the details right about instructional design is quite hard, or at least it appears that way, judging from how many bad examples there are.  Yet the failures stem more from a lack of knowledge than from inherent complexity.  While there are some depths to the underlying principles that aren’t sufficiently known, they can be learned.  A second level, embedding systematic creativity into the process, is another component that’s also missed, though this time it’s from a broken process more than a lack of knowledge.

What we want are learning solutions that really shine: where the learning experience is engaging, efficient, and effective.  Whether you’re creating products for commercial sale, or solutions for internal or external partners, you want to take your learning experience design to the next level.  So, how does an organization improve their learning design process to create stellar learning?

Let’s go through this, step by step.  First, you’ve got to know what you should be doing. I’ve gone on before about what’s broken in learning design, and what needs to be done.  That can be learned, developed, practiced, and refined.  Ideally, you’d have a team with a shared understanding of what really good learning is composed of and looks like. But it’s not just the deep learning.

There’s more: the team needs to develop both an understanding of the learning principles, and a creative approach that encourages striking a balance between pragmatic constraints and a compelling experience.  Note that creating a compelling experience isn’t about wildly expensive production values, but instead about ensuring meaningfulness, both of the content and the context (read: examples and practice). The learners have to be engaged cognitively and emotionally, challenged to work through and apply the material, to really develop the skills. If not, why bother?  Again, it’s not about expensive media; it can be done in text, for crying out loud! (Not that I’m advocating that, but just to emphasize it’s about design, not media.)

I find that it’s not that designers aren’t creative, but that there’s just no tolerance in the system for taking that creative step.  Yes, it can be hard to break out of old approaches, but there has to be an appreciation for the value of creating engaging experiences.  I will admit that initially the process may take a bit longer, but with practice the design doesn’t take longer, yet the results are far better.  It does, however, take a shared understanding of what an engaging experience is, just as it takes an understanding of the nuances of creating meaningful learning.

And that level of understanding about both deep learning and creative experience design can be developed as a shared understanding among your team in very pragmatic ways (applying those principles to the design of that learning, too).   It’s just not conscionable anymore to be doing merely mediocre design.  It won’t lead to learning, and it’s a waste of money, as well as a waste of learners’ time.

That covers the design, and even a bit of the process, but what’s also needed is a look at your design tools and processes. And I’m not talking about whether you use Flash or not; I’m talking about your templates.  They can, and should, be structured to support the design I’m talking about.  Too often, the existing constraints stifle the very depth and creativity needed, saddling designers with unnecessary components and not requiring the appropriate ones.  Factors that can be improved include templates for design, tools for creation, and even underlying content models!  They all have to strike the balance between supportive structure and lack of confinement.

Look, I’ve worked numerous times on projects where I’ve helped teams understand the principles, refine their processes, and yielded far better outcomes than you usually get.  It’s doable!  Yes, it takes some time and work, but the outcome is far better. On the flip side, I’ve reliably gone through and eviscerated mediocre design, systematically.  The point is not to make others look bad, but instead to point out where and how to improve the product.  The flaws from the teams that developed it can be remedied.  Teams can learn good design.  My goal, after all, is better learning!

A caveat: to the untrained eye, the nuances are subtle.  That’s why it’s easy to slide by mediocre design that looks good to the undiscerning stakeholder.  Stellar design doesn’t seem that much better, until you ascertain the learner’s subjective experience, and look at the outcomes as well.  In fact, I recall one situation where there was a complaint from a manager about why the outcome didn’t look that different.  I walked that manager through the design, and the complaints changed to accolades.

You should do it because it’s the right thing to do, but you can justify it as well (and when you do walk folks through the nuances, they’ll learn that you really do know what you’re talking about).  There’s just no excuse for any more bad learning, so please, please, let’s start creating good learning experiences.

23 July 2009

Intensive and Extensive Processing: Making Formal Stickier

Clark @ 2:08 pm

I’ve been thinking around the ways to use social learning to augment formal learning, and it’s bringing interesting things together.  The point is that there are things that make formal learning work better, and we want to draw upon them in smart ways.

We have, as I pointed out in the Broken ID series, elements we know lead to better learning: better retention over time, and better transfer to all appropriate situations (and no inappropriate ones).  These things include activating emotional and cognitive relevance, presenting the associated concepts, showing examples that link concept to context, having learners apply concept to context, and wrapping up the experience.  Two things, however, facilitate the depth and persistence of the learning: intensive processing, and extensive processing.

By extensive processing, I mean extending the learning experience.  I’ve previously talked about how Q2learning has a model where they can wrap a variety of activities together to describe a full competency preparation, including different forms of content, events, feedback, etc.  The point is that a single event has a low likelihood of achieving meaningful outcomes.  We need reactivation, as massed practice isn’t as effective as spaced practice.

There’s nothing wrong with a F2F session, if you can justify the opportunity & logistical costs, but it’s typically not enough by itself.  You’re better off making sure everyone’s on the same page at the start, reactivating later, doing individual assessment and looking for ways to help the individual afterward as well.  However, we want to extend the time spent in processing the concept and skills, not necessarily in quantity, but qualitatively from one big mass to many smaller activations.  Will Thalheimer does a good job of helping us recognize that breaking up learning works better, but we need to take more concrete advantage of the potential of technology to support this.
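As an illustration of moving “from one big mass to many smaller activations,” here’s a small sketch of an expanding-interval reactivation schedule. The doubling interval is an assumption for illustration, not a claim about the optimal spacing.

```python
from datetime import date, timedelta

def spaced_schedule(start, sessions, first_gap_days=1):
    """Return reactivation dates with expanding gaps (1, 2, 4, ... days)."""
    dates, gap, current = [], first_gap_days, start
    for _ in range(sessions):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= 2  # widen the interval after each reactivation
    return dates

# Four follow-ups after a kickoff session, instead of one massed event
for d in spaced_schedule(date(2009, 7, 1), 4):
    print(d)
```

The specific schedule matters less than the shape: many small, spaced activations rather than a single large one.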

The other area is increasing the depth of the processing. There are activities that can be done individually, and some that are facilitated by social as well.

I’ve previously talked about how we can use social tools to facilitate formal learning, but I want to go a little bit deeper.  I suggested three forms of processing: personalization, elaboration, and application. For personalization, I have in the past had learners keep a journal where they regularly reflect on how the learning is relevant to them (and a blog is a great tool for this). For traditional learning, I suggest recommending three reflections per week, and for learners who need a guide, three different types of processing: how what they’ve learned explains something in their past, how it suggests what they’ll do differently going forward, and/or how it connects to something else in their life.

That latter is a personal version of the more general task of having learners elaborate the content.  Thiagi has game frameworks that extend processing pretty much content-independently, and these are good, but there are more content-specific tasks as well.  You can design questions that require learners to reprocess the information specifically in relation to how it’s applied.  This can be to take a position on a controversial issue, or to have them connect it to another concept (really helpful for setting up a subsequent concept), or to explore a facet or nuance.  Discussion forums can be good here, ideally having learners post their own response before going in and seeing others’ (and having them comment constructively on one or several other posts).

Obviously, practice applying the concept to problems is the most important form of processing. While the best practice is mentored real practice, the problems with that (cost of mistakes, scalability of individual mentoring) mean games (or, to be PC™, immersive learning simulations) are another great form of practice.  However, don’t forget the reflection!  Reflection is an important form of processing after action, and one of the technology-mediated benefits is being able to capture individual performance and debrief it.

Another meaningful form of practice, particularly for knowledge work, is having a group work together to resolve a problem.  Providing a challenge that mimics one in the real world (e.g. responding to an RFP) that has enough deliberate ambiguity to generate productive discussion is great.  The discussion where learners are forced to come to a shared understanding that’s reflected in their response is highly likely to be fruitful, particularly if you’re careful in the design of the activity.  I recall an academic colleague who responded to my query about not using games by relating how expensive digital production was, but how inexpensive group activity was.  Again, a social augment to facilitate deep processing.

With a focus on creating meaningful processing, we can ensure that when we need to design real skill shifts (ensuring that this is such a situation is another story), we will think about ways to intensify, and extend, the processing to truly achieve the outcomes we need.  OK, have you processed that?

20 July 2009

Standards and success

Clark @ 3:28 pm

Apparently, Google has recently opined that the future of mobile is web standards.  While this is wonderfully vindicating, I think there’s something more important going on here, as it plays out for a broader spectrum than just mobile.

I’ve been reflecting on the benefits that standards have provided.  What worked for networks was the standardization on TCP/IP as a protocol for packet transmission.  What worked for email was standardization on the SMTP protocol.  HTTP standardization has been good for the web, where it’s been implemented properly! What’s been a barrier are inconsistent implementations of web standards, like Microsoft’s non-standard versions of HTML for browsers and Java.

The source of the standard may be by committee, or by the originator.  Microsoft’s done well for itself with the Office suite of applications, and by opening up the XML version, they’re benefiting while not doing harm.  They own the space, and everyone has to at least read and write their format to have any credibility. While IMS & IEEE held meetings to get learning content standards nailed down, ADL just put their foot down with SCORM (and US Defense is a big foot), and it pretty much got everyone’s attention.  But it’s having standards that matters.  The fact that Blu-ray finally won the battle has really opened up the market for high definition video!

On the other hand, keeping proprietary standards has hindered development.  At the recent VW talks hosted by SRI, one of the topics was the inability to transfer a character between platforms.  That’s good for the providers, but bad for the development of the field.  Eventually, one format will emerge, but it may take committees, or it may be that someone like Linden Labs will own the space sufficiently that everyone will lock into a format they provide. Until then, any investment has trouble being leveraged in a longer term picture, as the companies you go with may not survive!  There’s an old saying about how wonderful standards are because there are so many of them.  The problem is when they’re around the same thing!  I was regaling a colleague with the time I smoked (er, caused to burn up, not lighting up!) an interface card by trying to connect two computers to exchange data. One manufacturer had, contrary to the standard, decided to put 12 volts on a particular pin!

And, unfortunately, in the mobile space, the major providers here in the US want to lock you into their walled gardens, as opposed to, say, Europe, where all the phones have pretty much the same abilities to access data.  This has been a barrier to the development of services.  The web is increasingly powerful, with HTML5, and so while some things won’t work, web-based applications are becoming the lingua franca for not just content exchange but interactive activities.  The US is embarrassingly behind, despite having leading platforms (iPhone, Pre, etc.).

In one sense it’s sad that we can’t do better, but at least it’s good to have the web as a fallback now.  We can make progress when it doesn’t matter what device, or OS, you’re using, as long as you can connect.  The real news is that there is a lingua franca for mobile that you can use, so really there aren’t any reasons to hold off any longer.  Ellen Wagner sees a tipping point, and I’m pleased to agree.  There may be barriers for enterprise adoption, but as I frequently say: it’s not the technology, the barriers are between our ears (and maybe our pocketbooks :).

Update: forgot my own punchline.  Standards need to be, or at least become, open and extensible for real progress to be made.  When others can leverage, the greatest innovations can occur.

Standards are hard work, but the benefits for progress are huge.  This holds true in your organization, as well.  Are you paying attention to standards you should be using, and what you should standardize yourself?

15 July 2009

Mining Social Media

Clark @ 4:06 pm

One of the proposed benefits of social media is the capture of knowledge that’s shared, taking the tacit and making it explicit.  But really, how do we do this?  I think we need to separate out the real from the ideal.

The underlying premise is that we have an enlightened organization that’s empowering collaboration, communication, problem-solving, innovation, etc (what I’m beginning to term ‘inspiration’ in all senses of the word) by providing a social media infrastructure, learning scaffolding, and a supportive culture.  Now, all these people are sharing, but are we, and can we be, leveraging that knowledge?

The obvious first answer is that by sharing it with others, it’s being leveraged.  If information is shared with the relevant people, it’s been captured for organizational use by being spread appropriately.  That’s great, and far too few organizations are facilitating this in a systematic way.  However, I’m always looking for the optimal outcome: not just the best that is seen, but the best that can be. So how can we go further?

The typical response is using data mining that focuses on semantic content: systematically parsing the discussions, and using powerful semantic tools to attempt to capture, characterize, and leverage information systemically. (Hmm, you could map out the knowledge propositions, and link them into coherent chains and then track those over time to see significant changes, even regularly re-sort to see if different perspectives are changing…oh, sorry, got carried away, enough adaptive system designing :).
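For a sense of what even a crude first pass at that daydream might look like, here’s a toy sketch that compares word frequencies across two time windows of discussion posts to surface shifting topics. The stop-word list and scoring are invented placeholders; real proposition mapping would require serious NLP well beyond this.

```python
from collections import Counter
import re

STOP = {"the", "a", "an", "is", "to", "of", "and", "in", "we", "it"}

def topic_counts(posts):
    """Count content words across discussion posts (a crude proxy for topics)."""
    words = (w for p in posts for w in re.findall(r"[a-z']+", p.lower()))
    return Counter(w for w in words if w not in STOP)

def shifting_topics(older_posts, newer_posts, n=3):
    """Surface the terms whose frequency grew most between two time windows."""
    old, new = topic_counts(older_posts), topic_counts(newer_posts)
    growth = Counter({w: new[w] - old[w] for w in new})
    return [w for w, _ in growth.most_common(n)]

older = ["we tried the template", "the template is fine"]
newer = ["mobile support matters", "mobile is the issue", "mobile standards"]
print(shifting_topics(older, newer, 1))  # → ['mobile']
```

Even this trivial re-sort over time windows hints at the “are different perspectives changing?” question, which is exactly where word counting stops and semantics would have to start.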

In terms of social media systems, while there are analytics available, semantics are not part of it, as far as I can see.  Further, I searched on social media mining, and found out that the first international workshop will be happening in November, but it’s not happened yet. There’s an interesting PhD thesis on the topic from UMaryland, but it’s focused on blogs and recommendations. In other words, it’s not ready for prime time.

The point is that machine learning and knowledge mining mechanisms are in our future, but not our present.  Don’t get me wrong, there are huge possibilities and opportunities here, but they’re a ways off.   So, are we back to the best that can be?  I want to suggest one other possibility.  The systemic mechanisms are nice because, set up properly, they run regardless, but there’s another approach, and that’s human processing.  For all the advances in technology, our brains are still pretty much the most practical semantic pattern-matching engines going.  So how would that work?

Well, let’s go back to the role that learning professionals play. We’ve already looked at how they could change as learning units take over responsibility for the broader picture of learning in the organization.  Learning professionals need to be nurturing social learning, and that means being in there, monitoring discussions for opportunities to draw out other members, spark useful feedback, develop skills, and more.

Well, they also can and should be looking for outcomes that could be redesigned/redeveloped/reproduced for broader dissemination.  They should be monitoring what’s happening and looking for information that’s worth culling out and distilling into something that’ll really bring out the impact of that information. Turning information into knowledge and even wisdom!

Yes, that’s a greater responsibility (though it’s also fun; you shouldn’t be in the learning space if you don’t love learning!).  It’s a new skill set, but I’ve already argued for that.  The world’s changing, and the status quo won’t last long anyway.  So, while you can just hope that individuals will perceive the value of the information created, and even facilitate that by encouraging people to participate in all the relevant communities (which will likely cross roles, products/services, and more), there’s a step further that’s to the benefit of the organization and the learners.

We’ll steadily build systems to support that process, but it will be facilitated, and advanced, by individual practice to complement, supplement, and inform the mechanistic approaches.  Don’t ignore this role; plan for it, prepare for it, and skill up for it.  Responsibility for recognizing value should be shared, so that the individuals in the network are also doing it (for example, retweeting valuable information), and that’s a learning skill that should be developed.

Here’s hoping you find this valuable!

14 July 2009

Implementing Learning Redesign

Clark @ 3:10 pm

In my Broken ID series, I talked about the mistakes people make and how the elements of elearning should be redesigned.  I didn’t talk about how you’d revamp your design processes to achieve those results.  And I should, because it’s easy to ‘get’ the concepts, harder to turn around and revise your organizational design processes so that they systematically provide improved design. I’ve been involved in improving organizational design processes in several different instances, and it took several different steps to lead to persistent change.

Naturally, it starts with a good vision; you’ve got to have a sound basis for good design on tap.  The Broken ID series is a good start (and there are others), but it takes more than that.

The next step, naturally, is working through the implications for the design process; mapping out the principles and how they play out in practice makes the design guidance concrete.  It helps if everyone’s on the same page and a shared understanding has been negotiated, so developing this as a team is valuable.  Having this facilitated by someone who can help interpret the principles through concrete examples and then apply them to in-house work product is ideal, but even internal workshopping would likely provide some improvement.

Of course, this works better if the frameworks and design tools are aligned with this new vision.  That is, any design templates need to be reviewed and updated, or design support needs to be created.  The point is to provide scaffolding because old approaches are hard to shift. Think of it as performance support for design.

When I’ve been part of making this work in the past, a real benefit has come from having the first outputs from the design process reviewed.  External review has advantages, but even peer review (by those who have not been part of the generating design team) can be advantageous.  Document the mistakes made (anonymity may be desirable), or at least the remedies, and share them, so others learn from the process.

Finally, putting in place processes around the design process, e.g. ensuring that the solutions are designed to meet strategic initiatives, is a level of extra care to help ensure that the learning solution is of benefit.  Not just ROI, but aligned to the business.

It’s surprisingly hard to make design changes persistent, and it’s been my experience that token efforts don’t lead to lasting results.  It takes a systematic effort so that it’s hard to go back, as opposed to being hard to continue.  That’s when you’ll find the change sticking.

There’s clearly still a deep need for better learning design, and the solution, while not trivial, is also not rocket science.  There is a straightforward set of steps that will yield better designs, by design, and it’s reasonable in resources and time.  Let’s practice what we preach, and design our design processes to be optimal, not just expedient. So, no more excuses for bad design, please!

7 July 2009

Beyond Web 2.0

Clark @ 7:35 am

In preparing for a talk I’m going to give, I was thinking about how to represent the trends from web 1.0 through 2.0 to 3.0.  As I’ve mentioned before, in my mind 3.0 is the semantic web. I think of web 2.0 as really two things, the social read-write user-generated content web, and the web-services mashup web.  In elearning, we tend to focus on the former, but the latter is equally important.

However, if we think about web 2.0 as user-generated content, we can think about 1.0 as producer-generated content.  The original web was what people savvy enough (whether tech or biz) could get up on the web.  The new web is where it’s easy for anyone to get content up, through blogs, photo-, video-, and slide-sharing sites, and more.

Extending that, what’s web 3.0 going to be?  If we take the semantic web concept, the reason we add these tags is for systems to start being able to use search and rules to find and custom-deliver content.  An extension, however, is to have the system generate the necessary content (cf Wolfram|Alpha).  In a sense, by knowing some things about you and your interests, needs, and activities, a system could proactively choose what and when to deliver information.
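As a sketch of what “proactively choose what and when to deliver” might mean at its very simplest, here’s a toy matcher that scores catalog items against a user’s context tags. The tags, catalog, and scoring are all invented for illustration; a real semantic-web system would use richer representations and inference, not simple set overlap.

```python
# Toy proactive delivery: match a user's context tags to content metadata.
# Tags, catalog, and scoring are invented; real systems would use semantic
# representations and rules rather than set intersection.
def recommend(profile_tags, catalog):
    """Pick the item whose tags best overlap the user's current context."""
    scored = [(len(profile_tags & set(item["tags"])), item) for item in catalog]
    best_score, best_item = max(scored, key=lambda pair: pair[0])
    return best_item if best_score > 0 else None

catalog = [
    {"title": "Intro to spaced practice", "tags": ["learning", "design"]},
    {"title": "Mobile web standards primer", "tags": ["mobile", "standards", "web"]},
]
user_context = {"mobile", "standards"}  # hypothetically inferred from activity
print(recommend(user_context, catalog)["title"])  # → Mobile web standards primer
```

The interesting part isn’t the matching; it’s that the system, not the user, initiates the delivery based on what it knows about the user’s needs and activities.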

And that, to me, is really system-generated content, and a real opportunity.  It’s not ahead of what we can do (though I recognize it’s ahead of where most are ready to be; why do you think it’s called Quinnovation? :), but it’s certainly something to keep on your radar.  And when you’re ready, so am I!

6 July 2009

Web 2.0 Learning Skills

Clark @ 4:31 pm

The Learning Circuits Blog’s big question of the month asks:

In a Learning 2.0 world, where learning and performance solutions take on a wider variety of forms and where churn happens at a much more rapid pace, what new skills and knowledge are required for learning professionals?

I have to say that there’s a lot in this.  Taking a performance ecosystem approach, we also need to recognize that the responsibility of the learning role is more than just courses, it’s performance support, social/informal learning, content models, mobile, and more.  How does this play out?

For one, it’s a shift in perspective.  The responsibility needs to be for all organizational learning, not just formal learning.  Who better?  This means understanding information design, usability, and information architecture as well as instructional design.  It also means designing for mobile, not just the classroom and desktop.  Thus, we need expanded content development skills.

There is more, however. As my colleagues and I have been talking, it’s also clear that the role of the learning designer will likely move from exclusively content developer to spending more time as a learning facilitator.  If we start having user-generated content, while we might occasionally be formalizing that, we’ll also need to be facilitating the learning process itself. We’ll have to understand how to nurture groups into cohesion, communication, and collaboration: how to catalyze discussions, how to maintain commitment, how to neutralize negativity, and how to reach out to those who might feel alienated.

As a consequence, we’ll also have to understand organizational culture: the drivers and barriers to individuals feeling safe and valued enough to contribute. We’ll have to understand incentives, how to moderate behavior, how to align vision.  It may not be completely within our power to address, but we have to know, recognize, and nurture useful cultural components, and know when and how to point out problems to those who can change things.

We also won’t, for at least the short and medium term, be able to assume individual learning skills.  We’ll have to know what individual and group learning skills are, make them explicit, assess and nurture them, and value them. It will mean letting go, too, as Jane Bozarth points out.

Finally, we’ll have to be smarter about organizational goals, because all of this can’t be done at once for everything, so we’ll have to prioritize.  We’ll have to earn the right to take on these responsibilities by showing that we know how they contribute to organizational success.

If you don’t get this, we should talk. Developing these skills is critical, and the time to get moving is now.  Is your organization ready?

2 July 2009

Minimizing Transformative Disruption

Clark @ 2:23 pm

A tweet by @JoshuaKerievsky pointed me to the Satir Change Model, in the context of introducing agile programming. The model purports to capture the disruptive effects of a new idea until it’s internalized, and I find it resonates quite well.  My simplified version looks at it from the point of view of organizational change upon introduction of a new initiative, such as the organizational learning transformations I’m espousing and supporting.

[Figure: the original Satir change curve]

In this simplified version, you can see that an intervention initially creates a decrement in performance, until the intervention takes hold; then there are some hiccups until the system stabilizes at a new and (hopefully) improved performance level.  While we want the improvement, the decrement is something we’d like to minimize.  So how do we do that?

In researching it a little, I came upon a book that discussed using a stepwise approach to minimize it (also in the context of software process improvement), and it had a version of the diagram that demonstrated smaller decrements.

[Figure: the Satir change curve with stepwise interventions]

By having smaller introductions that break up the intervention, you decrease the negative effects.  The point is to take small steps that make improvements instead of one monolithic change.
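One way to see why smaller steps shrink the decrement is a toy calculation (the numbers and the simple disruption model here are invented purely for illustration, not from the Satir model itself):

```python
# Toy illustration: compare the peak performance drop of one monolithic
# change vs. the same total change delivered in smaller stages.
# Assumes each step temporarily costs a fixed fraction of its size, and
# that the system recovers between stages, so the worst single moment
# is just the largest step's disruption.

def peak_drop(step_sizes, disruption_factor=0.5):
    """Worst momentary performance decrement across the staged steps."""
    return max(size * disruption_factor for size in step_sizes)

monolithic = peak_drop([10])       # one big intervention
staged = peak_drop([3, 3, 4])      # same total change, three steps

print(monolithic, staged)  # 5.0 2.0
```

The total improvement is the same either way; what changes is how deep the trough gets at any one time, which is exactly the disruption we’re trying to minimize.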

That’s what I’m trying to achieve by breaking up the organizational transformation implied by the performance ecosystem, and customizing it for an organization by prioritizing steps into next week, next month, next year, etc.  Of course, the diagram is only indicative, not prescriptive, but I trust you recognize what I mean.

The overall approach is to achieve the improvement, but in a staged way customized for a particular organization and context, not a one-size-fits-all approach that really won’t fit anyone.

[Figure: overlapped Satir change curves for staged interventions]

The goal is to maximize improvements while minimizing disruption, and to do so in ways that capitalize on previous efforts and existing infrastructure.  This really requires understanding how the different components relate: how content models support mobile, how performance support articulates with formal learning and social media, and more.  And, of course, understanding the nuances of the underpinning elements and how they are optimized.

Organizations can’t continue with the status quo of only formal learning, but I reckon many folks aren’t sure where and how to start.  That’s the point of using a framework that shows how the elements interact, and coupling it with a specific organizational assessment.  From there, you can prioritize steps, come up with action plans, and be prepared to choose vendors rather than have a vendor sell you on what they do best.  You’ve got to have a plan, or where you end up may not be the best place for your organization.

I’m reminded of the Cheshire Cat and Alice:

“Would you tell me, please, which way I ought to go from here?”
“That depends a good deal on where you want to get to,” said the Cat.
“I don’t much care where–” said Alice.
“Then it doesn’t matter which way you go,” said the Cat.
“–so long as I get SOMEWHERE,” Alice added as an explanation.
“Oh, you’re sure to do that,” said the Cat, “if you only walk long enough.”

So, do have a plan of where you want to get to, as well as an intent to start moving.
