Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

28 February 2009

Designing Learning

Clark @ 12:59 PM

Another way to think about what I was talking about yesterday in revisiting the training department is taking a broader view.  I was thinking about it as Learning Design, a view that incorporates instructional design, information design and experience design.

I’m leery of the term instructional design, as that label has been tarnished with too many cookie cutter examples and rote approaches for me to feel comfortable (see my Broken ID series).  However, real instructional design theory (particularly when it’s cognitive-, social-, and constructivist-aware) is great stuff (e.g. Merrill, Reigeluth, Keller, et al); it’s just that most of it’s been neutered in interpretation.  The point being, really understanding how people learn is critical.  And that includes Cross’ informal learning.  We need to go beyond just the formal courses, and provide ways for people to self-help, and group-help.

However, it’s not enough.  There’s also understanding information design.  Now, instructional designers who really know what they’re doing will say, yes, we take a step back and look at the larger picture, and sometimes it’s job aids, not courses.  But I mean more, here.  I’m talking about, when you do sites, job aids, or more, including the information architecture, information mapping, visual design, and more, to really communicate, and support the need to navigate. I see reasonable instructional design undone by bad interface design (and, of course, vice-versa).

Now, how much would you pay for that? But wait, there’s more!  A third component is the experience design.  That is, viewing it not from a skill-transferral perspective, but instead from the emotional view.  Is the learner engaged, motivated, challenged, and left fulfilled?  I reckon that’s largely ignored, yet myriad evidence points us to the realization that the emotional connection matters.

We want to integrate the above.  Putting a different spin on it, it’s about the intersection of the cognitive, affective, conative, and social components of facilitating organizational performance.  We want to do the least we can to achieve that, and we want to support working alone and together.

There’s both a top-down and bottom-up component to this.  At the bottom, we’re analyzing how to meet learner needs, whether it’s fully wrapped with motivation, or just the necessary information, or providing the opportunity to work with others to answer the question.  It’s about infusing our design approaches with a richer picture, respecting our learners’ time, interests, and needs.

At the top, however, it’s looking at an organizational structure that supports people and leverages technology to optimize the ability of the individuals and groups to execute against the vision and mission.  From this perspective, it’s about learning/performance, technology, and business.

And it’s likely not something you can, or should, do on your own.  It’s too hard to be objective when you’re in the middle of it, and the breadth of knowledge to be brought to bear is far-reaching.  As I said yesterday, what I reckon is needed is a major revisit of the organizational approach to learning.  With partners we’ve been seeing it, and doing it, but we reckon there’s more that needs to be done.  Are you ready to step up to the plate and redesign your learning?

27 February 2009

Revisiting the Training Department

Clark @ 2:29 PM

Harold Jarche and Jay Cross have been talking about rethinking the training department, and I have to agree.  In principle, if there is a ‘training’ department, it needs to be coupled with a ‘performance’ department and a ‘social learning’ department, all under an organizational learning & performance umbrella.

What’s wrong with a training department?  Several things you’ll probably recognize: all problems have one answer (‘a course’); no relationship to the groups providing the myriad portals; no relationship to anyone doing any sort of social learning; no ‘big picture’ comprehension of the organization’s needs; and typically the courses aren’t that great either!

To put it another way, it’s not working for the organizational constituencies.  The novices aren’t being served, because the courses are too focused on knowledge and not skills, aren’t sufficiently motivating to engage them, and are used even when job aids would do.  The practitioners aren’t getting, or able to find, the information they need, and have trouble getting access to expert knowledge.  And experts aren’t able to collaborate with each other, or to work effectively with practitioners to solve problems.  Epic fail, as they say.  OK, so that’s a ‘straw man’, but I’ll suggest that it’s all too frequent.

The goal is a team serving the entire learnscape: looking at it holistically, matching needs to tools, nurturing communities, leveraging content overlap, and creating a performance-focused ecosystem.  I’ve argued before that such an approach is really the only sustainable way to support an organization.  However, that’s typically not what we see.

Instead, we tend to see different training groups making courses in their silos, with no links between their content (despite the natural relationships), often no link to content in portals, no systematic support for collaboration, and overall no focus on long-term development of individuals and capabilities.

So, how do we get there from here?  That’s not an easy answer, because (and this isn’t just consultant-speak) it depends on where the particular organization is at, and what makes sense as a particular end version, and what metrics are meaningful to the organization.  There are systematic ways to assess an organization (Jay, Harold, and I’ve drafted just such an instrument), and processes to follow to come up with recommendations for what you do tomorrow, next month, and next year.

The outcome should be a plan, a strategy, to move towards that goal.  The path differs, as the starting points are organization-specific.  One way to do it is DIY, if you’ve got the time; it’s cheaper, but more error-prone.  The fast track is to bring in assistance and take advantage of a high-value, lightweight infusion of the best thinking to set the course.  No points for guessing my recommendation.  But with the economic crisis and organizational ‘efficiencies’, can you afford to stick to the old ineffective path?

25 February 2009

This time, it’s personal…

Clark @ 12:54 PM

So on the way to dinner, my son told me on Friday that he’d tied a guy’s shoes together (the kid fell down when he tried to get up at the end of class, and was late to the next).  I asked, and this was a) a friend, b) a prank (the latest volley in an ongoing series),  c) the boy wasn’t hurt,  but d) was amused.  Unacceptable, still.  It was potentially dangerous, interfered with school operations, and consequently inappropriate. I chided him to that effect, and thought no more about it.  Until my wife let me know Monday night what the school administration had done as a consequence.

Three teachers, together, had reported it, not one of them talking to my son directly.  So he was called into the office, and the Vice Principal who handled it decided on lunch-time detention for two days, at a special table in the cafeteria.  We weren’t involved until afterwards, when my wife heard about it, and then talked to the VP on the second day.  OK, what he did wasn’t the smartest thing to do, and we absolutely believe that consequences are an appropriate response.  As my wife said, 95% of the time she’ll side with the teachers (her dad was one). So it’s not that there was a response, it’s just what the response was.  Our issue is with the process used, and the punishment.

Let’s start with the fact that he’s a good kid, who gets good grades because it’s expected of him, despite the fact that the current school situation is such that the content is dull, and the homework staggering (he’s opting out of sports because he doesn’t feel he has the time).  He’s bored at school, as the work’s too easy for him, and the repetitive drill is mind numbing.  However, no argument, his action wasn’t acceptable.  In his case, being called to the office at all was probably enough, as the only previous time he’d been there was to recognize him for something good he did.  A talking-to, for a first infraction, likely would weigh on him enough.  Some time for reflection, and even writing an apology to the friend or the teachers, or just a treatise on the folly of the act, would be rehabilitative, useful, and understandable.  Instead, we have a punitive action.  “You’re bad, and we need to punish you.”

My wife talked to the VP, trying to point out that while intervention was certainly called for, public humiliation wasn’t. The VP denied that it was public, saying that the table is off to the side.  Yes, in the same room, and obviously the location of the ‘bad kids’.  As my son told us, a number of his friends walked by and commented.  I’m not buying it; it’s public humiliation, and that doesn’t make sense as a first recourse (if ever), particularly in a case of behavior that was bad judgment, not malicious.

So either I’m over-reacting, or the process they applied (the teachers not talking to him about it), and the result it came up with (public humiliation for a first offense), is broken.  While I admit it’s hard to be objective, I’m inclined to believe the latter.  Shouldn’t we be using misbehavior as an opportunity to show how to respond appropriately?  We may have societally moved away from rehabilitation in our penal system, but in our education system?  What’s his lesson here?  I mean, we don’t put people in the stocks anymore!  Though I’m tempted, with a certain VP.  Of course, showing up (albeit unnamed) in a blog post may be the same, eh?

22 February 2009

Monday Broken ID Series: Examples

Clark @ 11:28 AM

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays.  I intend to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do good design.

I see several recurring problems with examples, and they aren’t even the deepest problems.  Examples tend to be mixed in with the concept, instead of separate, if they exist at all.  Then, when they do exist, too often they’re cookie-cutter examples that don’t delve into the elements that make examples successful, let alone make them intrinsically interesting.  Yet we know what these elements are!

Conceptually, examples are applications of the concept in a context.  That is, we have a problem in a particular setting, and we want to use the model as a guide to solving the problem. Note that the choice of examples is important. The broader the transfer space, that is, the more general the skills, the more you want examples that differ in many respects.  Learners generalize the concept from the examples, and the extent to which they’ll generalize to all appropriate situations depends on the breadth of contexts they’ve seen (across both examples and practice).  You need to ensure that the contexts the learner sees are as broadly disparate as possible.

Note that we should also be choosing problems and contexts that are of interest to the audience.  Going beyond just the cognitive role, we should be trying to tap into the motivational and engagement factors.  Factor that into the example design as well!

Now, we know that examples have to show the steps that were taken.  They have to have specific steps from beginning to end.  And, I add, those steps have to refer back to the concept that guides the presentation.  You can’t just say “first you do this, then you do this”, etc.; you have to say “first, using the model, you do this, and then the model says to do that”.  You need to show the steps, and the intermediate work products.  Annotating them is really important.

And that annotation is not just the steps, but also the underlying thought processes.  The problem is, experts don’t even have access to their thought processes anymore!  Yet, their thinking really works along lines like “well, I could’ve done A, but because of X I thought B was a better approach, and then I could do C, but because of Y I tried D”, etc.  The point being, there’s a lot of contextual clues that they evaluate that aren’t even conscious, yet these clues are really important for learners. (BTW, this is one of the many reasons I recommend comics in elearning; thought bubbles are great for cognitive annotation.)

Another valuable component is showing mistakes and backtracking. This is a hard one to get your mind around, and yet it’s powerful both cognitively and emotionally.  First, experts model the behavior perfectly, and when learners try, they make mistakes, and may turn off emotionally (“I’m having trouble, and it looks so easy, I must not be good at this”).  In reality, experts make mistakes all the time, and learners need to know that. It keeps you from losing them altogether!

Cognitively it’s valuable, too.  When experts show backtracking and repair, they’re modeling the meta-skills that are part of the expertise.  Unpacking that self-monitoring helps learners internalize the ‘check your answer’ component that’s part of expert performance.  This takes more work on the part of the designer, like we had with the concept, but if the content is important (otherwise, why are you building a course?), it’s worth doing right.

Finally, I believe it’s important to convey the example as a story.  Our brains are wired to comprehend stories, and a good narrative has better uptake.  Having a protagonist documenting the context and problem, and then solving it with the model to achieve meaningful outcomes, is more interesting, and consequently more memorable.  We can use a variety of media to tell stories, from prose, through audio (think mobile and podcasts) and narrated slideshow, animation, or video.  Comics are another channel.  Stories also are useful for conveying the underlying thought processes, via thought bubbles or reflective narration (“What was I thinking?…”).

So, please do good examples.  Be exemplary!

21 February 2009

Strategy, strategically

Clark @ 7:44 AM

In addition to working on the technology plan for my school district, I’ve also been assisting a not-for-profit trying to get strategic about technology.  The struggles are instructive, but looking across these two separate instances as well as the previous organizations I’ve assisted, I’m realizing that there are some common barriers.

The obvious one is time. The old saying about alligators and draining the swamp is too true, and it’s only getting worse.  Despite an economic stimulus package for the US and other countries, and (finally) a budget in my own state, things are not likely to get better soon.  Even if companies could hire back everyone they’ve laid off, the transition time would be significant.  It’s hard to sit back and reflect when you’re tackling more work with fewer resources.  Yet, we must.

The second part is more problematic.  Strategic thinking isn’t easy or obvious, at least to all.  For some it’s probably in their nature, but I reckon for most it takes a breadth of experience and an ability to abstract from that experience to take a broader perspective.  Abstraction, I know from my PhD research on analogy, isn’t done well without support.  Aligning that perspective with organizational goals simultaneously adds to the task.  Do it while keeping both short- and long-term value in mind, for several different layers of stakeholders, and you’re talking some serious cognitive overhead.

We do need to take the time to be strategic.  As I was just explaining on a call, you don’t want to be taking small steps that aren’t working together towards a longer-term goal.  If you’re investing in X, and Y, and Z, and they don’t build on each other, you’re missing an opportunity.  If you’ve alternatives A & B, and A seems more expedient, if you haven’t looked to the future you might miss that B is a better long-term investment.  If you don’t evaluate what else is going on, and leverage those initiatives, because you’re just meeting your immediate needs, you’re not making the best investment for the organization, and you’re putting yourself at risk.  You need to find a way to address the strategic position, at least for a percentage of your time (and that percentage goes up with your level in the organization).

To cope, we use frameworks and tools to help reduce the load, and follow processes to support systematicity and thoroughness. The performance ecosystem framework is one specific to use of technology to improve organizational learning, innovation, and problem-solving, but there are others.  Sometimes we bring in outside expertise to help, as we may be too tightly bound to the context and an external perspective can be more objective.

You can totally outsource it, to a big consulting company, but I reckon that the principle of ‘least assistance’ holds here too.  You want to bring in top thinking in a lightweight way, rather than ending up with a bunch of interns trying to tie themselves to you at the wrist and ankles.  What can you do that will provide just the amount of help you need to make progress?  I have found that a lightweight approach can work in engagements with clients, so I know it can be done.  Regardless, however, of whether you do it yourself, with partners, or bring in outside help, don’t abandon the forest for the trees; do take the time.  You need to be strategic, so be strategic about it!

20 February 2009

The ‘Least Assistance’ Principle

Clark @ 9:55 AM

While I agree vehemently with most of a post by Lars Hyland, he said one thing I slightly disagree with, and I want to elaborate on it.  He was disagreeing with  “buying rapid development tools to bash out ill formed ‘e-learning’ to an audience that will not only be unimpressed but also none the wiser – or more productive”, a point I want to nuance.  I agree with not using rapid elearning to create courses for novices, but there is a role for bashing out courses for another audience, the practitioner.  And there’s something deeper here to tease out.

I want to bring up John Carroll’s minimalist instruction, and highly recommend it to you. He focused on a) meaningful tasks, b) getting to active learning quickly, c) including error recognition & recovery, and d) making learning activities self-contained (a lot like games, actually).  In The Nurnberg Funnel, he documented how this design led to 25 cards, 1 per learning goal, that beat a 94-page traditionally designed manual hands-down in outcomes.

Another way to think about it is something Jim Spohrer mentioned to me once. Now, Jim’s been an Apple Fellow, and is leading research at IBM’s Almaden Research Center.  He really cares and likes to help people, but he’s very busy.  So he adopted a ‘least assistance’ principle, where he would ask himself what’s the least he can do to get this person going, because there was more to do and more people to help than he was able to keep up with.  And I think it is a useful way to think about supporting learning.

This sounds a lot like performance support, and that’s definitely a mind-set we need to adopt. When Harold Jarche and Jay Cross talk about the death of the training department, they’re talking about not focusing on courses, and instead taking a broader, performance perspective.  Obviously, we want to talk about portals of resources, but we also need to recognize that there are formal learning situations that don’t require the full formality.

We develop full courses to incorporate motivation, practice, all the things non-self-directed learners need.  But there are times when we need to provide new information and skills to self-directed learners.  When we’re talking to practitioners who are good at their job, know what they’re doing and why, and know that they need to know this information and how they’ll apply it, we can strip away a lot of the window dressing.  We can just provide support to an SME so that their talk presents the relevant bits in a streamlined and effective way, and let them loose.  That, to me, is the role of rapid elearning.

It’s not for novices, but it’s effective, and more efficient.  In this economic climate, we don’t have the luxury of full development of courses for every need.  Moreover, in any climate, we shouldn’t give people what they don’t need, instead we need to focus on what the ‘least assistance’ we can give them is.

In many cases, the least assistance we can give is self-help, which is why I believe social learning tools are one of the best investments that can be made.  The answer may well be ‘out there’, and rather than learning designers trying to track it down and capture it, the learner can send out the need, and there’s a good chance an answer will come back!  There’s a lot to making such an environment work; it’s not the case that ‘if you build it, they will learn’, but it’s still going to fill a sweet spot in the performance ecosystem that may not currently be addressed.

Don’t look for everything you can do in one situation, unless you’re flush with too much time and resources (in which case, watch out!), instead look for the least you can do that will get the job done so you can do more for everybody. It’s likely that’s more to their taste, anyway. And that’s enough from me on that!

18 February 2009

Measuring the right things

Clark @ 1:39 PM

For sins in my past, I’ve been invited on to our school district’s technology committee.  So, yesterday evening I was there as we were reviewing and rewriting the technology plan (being new to the committee, I wasn’t there when the existing one was drafted).  The plan is broken up into five parts, including curriculum, infrastructure, and funding; I was on the professional development section, with a teacher and a library media specialist.  Bear with me, as the principles here are broader than schools.

The good news: they’d broken the goals up into two categories, the teachers’ tech skills, and the integration of tech into the curriculum. And they were measuring the tech skills.

The bad news: they were measuring things like percentage of teachers who’d put up a web page (using the district’s licensed software), and the use of the district’s electronic grading system. And their professional development didn’t include support for revising lesson plans.

Houston, we have some disconnects!

So, let’s take a step back.  What matters?  What are we trying to achieve?  It’s that kids learn to use technology as a tool in achieving their goals: research, problem-solving, communication.  That means, their lessons need to naturally include technology use.  You don’t teach the tool, except as ancillary to doing things with it!

What would indicate we were achieving that goal?  An increase in the use of lesson plans that incorporate technology into non-technology topics would be the most direct indicator.  Systematically, across the grade levels.  One of the problems I’ve seen is that some teachers don’t feel comfortable with the technology, and then for a year their students don’t get that repeated exposure.  That’s a real handicap.

However, teachers’ lesson plans aren’t evaluated (!).  They range from systematic to ad hoc.  The way teachers are evaluated is that they have to set two action research plans for the year, and they take steps and assess the outcomes (and are observed twice), and that constitutes their development and evaluation.  So, we determined that we could make one of those action research projects focus on incorporating technology (if, as the teacher in our group suggested, we can get the union to agree).

Then we needed to figure out how to get teachers the skills they need.  They were assessed on their computer skills once a year, and courses were available.  However, there was no link between the assessment and the courses.  A teacher complained that the test was a waste of time, and then revealed that it’s 15-30 minutes once a year.  The issue wasn’t really the time; it’s that the assessment wasn’t used to help the teachers.

And instead of just tech courses, I want them to be working on lesson plans, and, ideally, using the tools to do so.  So instead of courses on software, I suggested that they need to get together regularly (they already meet by grade level, so all fifth grade teachers at a school meet together once a week) and work together on new lesson plans.  Actually, I think they need to dissect some good examples, then take an existing lesson plan and work to infuse it with appropriate technology, and then move towards creating new lesson plans.  To do so, of course, they’ll need to de-emphasize something.

Naturally, I suggested that they use wikis to share the efforts across the schools in the district, but that’s probably a faint hope.  We need to drive them into using the tools, so it would be a great requirement, but the level of technology skills is woefully behind the times.  That may need to be a later step.

One of the realizations is that, over maybe a ten-year window, this problem may disappear: those who can’t or won’t use tech will retire, and the new teachers will have it by nature of the culture.  So it may be a short-term need, but it is critical.  I can’t help feeling sorry for those students who miss a year or more owing to one teacher’s inability to make a transition.

At the end, we presented our results to the group.  We’ve a new coordinator who seems enthusiastic and yet realistic, so we’ll see what happens.  Fingers crossed! But at least we’ve tried to show how you could move towards important goals within the constraints of the system.  What ends up in the plan remains to be seen, but it’s just a school-level model of the process I advocate at the organizational level.  Identify what the important changes are, and align the elements to achieve them (a bit like ID, really).  If you’re going to bother, do it right, no?

15 February 2009

Monday Broken ID Series: Concept Presentation

Clark @ 12:07 PM

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays.  The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

At some point (typically, after the introduction) we need to present the concept.  The concept is the key to the learning, really.  While we’ve derived our ultimate alignment from the performance objective, the concept provides the underlying framework to guide one’s performance.  We use the framework to provide feedback that helps the learner understand why their behavior was wrong, both in the learning experience and, ideally, beyond it, as the learner uses the model to continue to develop their performance.  Except that, too often, we don’t provide the concept in a useful way.

What we too often see is a presentation of a rote procedure, without the underlying justification.  In business, we’ll teach a process.  In software, we’ll see feature/function presentations (literally going item by item through the menus!).  We’ll see tutorials to achieve a particular goal without presenting an underlying model.  And that’s broken.

We need models! The reason why is that people create mental models to explain the world.  People aren’t very good at remembering rote things (our brains are really good at pattern matching, but not rote memorization).  We can fake it, but it’s just crazy to have people memorize rote things unless it’s something we have to absolutely know cold (medical terminology is an example, as are emergency checklists for flights).  By and large, very little of what we need to know needs to be memorized.

Instead, what people need are models.  Models are powerful, because they have explanatory and predictive power.  If you forget a step in a procedure, but know the model driving the performance, you can regenerate the missing step.  With software, for instance, if you present the model, and several examples where the way to do something is derived from the model, and then you have the learner use inferences from the model to do a couple of tasks, you might be saved from having to present the whole system.

People will build models, so if you don’t give them one, it’s quite likely that the one they do build will be wrong.  And bad models are very hard to extinguish, because we patch them rather than replace them.  It places more responsibility on the designer to get the model, as, for reasons mentioned before, our SMEs may not be able to help us, but get it we must.  Realize that every procedure, software package, or behavior has a model that explains why it should be done in a particular way, and find it.  Then we need to communicate it.

Multiple models help! To communicate a model most effectively, we should communicate it in several ways.  Models are more memorable than rote material, but we need to facilitate internalization.  Prose is certainly one tool we can and should use (carefully, it’s way too easy to overwrite), but we should look at other ways to communicate it as well.

Multiple representations help in several ways.  First, they increase the likelihood that a learner will comprehend the model, and then have a path to comprehend the other representations.  Second, the multiple representations increase the number of paths to activate a model in a relevant context.  Finally, multiple representations increase the likelihood that one can map closely to the problem and facilitate a solution.

Multiple representations are, unfortunately, sometimes difficult to generate (more so than finding the original model).  However, we should always be able to at least generate a diagram.  This is because the model should have conceptual relationships, and these can be mapped to spatial relationships.  There’s some creativity involved, but that’s the fun part anyways!

Yes, doing good instructional design does take more work, but anything worth doing is worth doing well.  On a related, but important, note, the difference between broken ID and good ID is unfortunately subtle.  You may have to explain it (I have literally had to), but if you know what you’re doing and why, you should be able to.  And having developed a powerful representation increases the effectiveness, and success, of the learning, and consequently the performance.  Which is, of course, our goal.  So, go forth and conceptualize!

12 February 2009

On the road again

Clark @ 12:58 PM

Well, this spring is shaping up differently than I expected. Instead of doing the familiar talks or workshops in the usual places: Training’s Conference, eLearning Guild’s Annual Gathering, and ASTD’s TechKnowledge and International Conference, I’m doing new things in old and new places.  Not that I don’t like those conferences, in fact I recommend them; it’s just that life takes funny turns (and I like challenging myself). Which isn’t to say I won’t be at those conferences again (I hope and intend to).

So, where will I be showing up?  At VizThink, for one.  A conference I’ve been very interested in, and managed to get a chance to present at.  That’s really just in a few days (Feb 22-25), and I’ll be talking about the cognitive underpinnings behind diagrams (and more).  As well as soaking up some great thoughts from others!

I’ll also be talking at the 5th Annual Innovations in eLearning Conference, hosted by the Defense Acquisition and George Mason Universities in the beginning of June.  My topic is myths about new learners, and I intend to debunk much of the hype just as I like to do around learning styles (which will probably rear their head in the talk), as well as provide practical guidelines.  Folks like Will Wright and Vint Cerf are keynoting, so this is bound to be special.

Finally, assuming there are enough registrations, I will be at ASTD’s ICE (end of May), not speaking but running a pre-conference workshop on elearning strategy.  This is based upon my chapter in the forthcoming Michael Allen’s eLearning Annual 2009, covering both the important principles of elearning tactics like mobile, portals, and social learning, and how to tie those tactics together into a strategy.  The focus is on an integrated ‘performance ecosystem’, and I reckon it’s the most useful thing I can offer in this economic uncertainty.  I’ve given it as a talk before, but not as a workshop, and this is for managers and executives to take the next step in improving their organizational learning infrastructure.  It’s time to work smarter, folks!

One of the ways I work smarter and keep learning is to push myself into new areas that are beyond my comfort zone but that are within my reach (e.g. Vygotsky’s Zone of Proximal Development).  I recommend it to you too.  It’s a way to keep learning, and expanding.  I welcome new challenges, got any handy?

10 February 2009


Clark @ 3:06 PM

We recently finished watching a video series called Kamichu (we like anime).  It’s a remarkably cute series about a middle school girl who finds out she’s a god (apparently in the Shinto belief system). There are some subtle digs at cultural artifacts like politicians, sweet explorations of the difficulties of romance, and funny running gags.  I recommend it, but the thoughts it prompted are what I’m talking about here.

One of the interesting things about the show is its speed.  Each episode unfolds at its own leisurely pace, with soft musical backgrounds, and no laugh tracks.  Our (only recently) Disney-watching kids, now experienced with laugh tracks and frantic pacing, were enchanted.  It made me think about taking the time to develop an atmosphere, the time taken to really develop a mood.  Good movies do that, though less and less.

I’d recently been reflecting on pacing in music as well, regarding Pink Floyd. They similarly take the time to build the tension to make their musical flourishes.  As did the landmark Who’s Next album.  (OK, so my musical tastes indicate my age.  Still, the pacing matters.)

Serendipitously, I also just read an intriguing post about the history of addiction.  It starts off talking about how we used to listen to music, hearing our favorite pieces only infrequently, and likely badly.  Similarly, getting together for conversations and fun was time-consuming.  The post then goes on to cover the rise, and fall, of opiates (legal for many years), and finally suggests that technology is our new addiction, and that we still haven’t figured out what’s now appropriate with technology or not.  It’s long, but very interesting.

I’ve gone off before about slow learning, and I think this is another facet.  Not only are we rushing too much in our performance, our development processes, and the amount of time we devote to learning, we’re also not properly setting the stage.  I’ve been quick myself, but some of the best speakers seem to take their time getting to the point.  I think there’s a lot to process here, and perhaps a lot to learn.  We’ve less patience, and I think it’s affecting our confidence to take the time to do things properly.  If we don’t, we risk it not working.  If we do take our time, we run the risk of costing a bit more money.

In business, increasingly, I think we need to slow down and think a little, and the end result will end up being at least as fast, but also better quality.  I think that’s the wise decision, what do you think?
