Learnlets

Clark Quinn’s Learnings about Learning

What it takes

3 March 2009 by Clark

Over at the TogetherLearn site, I’ve added a post about ‘what it takes’. I guess it’s the ‘how do we make this work’ in me, but I wanted to wrap some concrete definition around their ‘future of the training department’. I very much agree with their view, but was concerned it could be viewed as too difficult.

Note that they are largely talking about a move to a self-help environment, as I discussed in my last post on the training department of the future. I reckon, however, that a truly deep revisit and rethink will look at formal learning, portals, content governance, and more, as well as the social learning component. Still, the process is largely the same; it’s just that the scope is larger. Just doing the social component, though, is likely to be the best short-term investment, getting large benefits from a small step.

Realize that the roadmap isn’t going to be as specific as might be desired, but it helps to take an objective look from an experienced perspective, at least line up some near-term goals as well as some long-term desires, and figure out some steps that will take you there. I can’t see an alternative, can you?

Workplace Learning in 10 years?

2 March 2009 by Clark

This month’s Learning Circuits Blog Big Question is “What will workplace learning look like in 10 years?”. Triggered by Jay & Harold’s post and reactions (and ignoring my two related posts on Revisiting and Learning Design), it’s asking what the training department might look like in 10 years. I certainly have my desired answer.

Ideally, in 10 years the ‘training department’ will be an ‘organizational learning’ group that’s looking across expertise levels and learning needs, responsible for equipping people not only to come up to speed, but to work optimally and collaborate to innovate. That is, it will be responsible for the full performance ecosystem.

So, there may still be ‘courses’, though they’ll be more interactive, more distributed across time, space, and context. There’ll be flexible, customized learning paths that will not only build your skills, but introduce you into the community of practice.

However, the community of practice will be responsible for collaboratively developing the content and resources, and the training department will have morphed into learning facilitators: refining the learning, information, and experience design around the community-established content, and also facilitating the learning skills of the community and its members. The learning facilitators will monitor the ongoing dialog and discussions, on the lookout for opportunities to help capture outcomes, and watch the learners for opportunities to develop their abilities to contribute. They’ll also be looking for opportunities to introduce new tools that can augment the community’s capabilities and create new learning, communication, and collaboration channels.

Their metrics will be different: not courses or smile sheets, but value added to the community and its individuals, and impact on the community’s ability to be effective. The skill sets will be different too: understanding not just instructional but information and experience design, continually experimenting with tools to look for new augmentation possibilities, and being able to identify and facilitate the process of knowledge or concept work, not just the product.

10 years from now the tools will have changed, so it may be that some of the tasks can be automated, e.g. mining the nuggets from the informal channels, but design & facilitation will still be key.   We’ll distribute the roles to the tools, leaving the important pattern matching to the facilitators.

At least, that’s what I hope.

Monday Broken ID Series: Perfect Practice

1 March 2009 by Clark

Previous Series Post | Next Series Post
This is one in a series of thoughts on some broken areas of ID that I’m posting on Mondays. I intend to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say ‘bad designer’, but instead to point out how to do good design.

Really, the key to learning is the practice. Learners have to apply knowledge, in the form of skills, to really internalize and ‘own’ the learning. Knowledge recitation, in the absence of application, leads to what cognitive science calls ‘inert knowledge’: knowledge that can be recited back, but isn’t activated in appropriate contexts.

What we see, unfortunately, is too much knowledge testing, and not meaningful application. We see meaningless questions checking whether people can recite back memorized facts, with no application of those facts to solve problems. We see alternatives to the right answer that are so obviously wrong that we can pass the test without learning anything! And we see feedback that isn’t specific to the deficit. In short, we waste our time and the learner’s.

What we want is appropriate challenge, contextualized performance, meaningful tasks, appropriate feedback, and more.

First, we should have picked meaningful objectives that indicate what learners should be able to do, in what context, to what level; now we design the practice to determine whether they can do it. Of course, we may need some intermediate tasks to develop their skills at an appropriate pace, providing scaffolding to simplify the task until it’s mastered.

We can scaffold in a variety of ways. We can provide tasks with simplified data first, that aren’t complicated by other factors. We can provide problems with parts already worked, so learners can accomplish the component skills separately and then combine them. We can provide support tools such as checklists or flowcharts to assist, and gradually remove them until the learner is capable.

We do need to balance the level of challenge, so that the task gets difficult at the right rate for the learner: too easy, and the learner is bored; too hard, and the learner is frustrated. Don’t make it too easy! If it matters, ensure they know it (and if it doesn’t, why are you bothering?).

The trick is not only the inherent nature of the task; much of the time it’s the alternatives to the right answer. Learners don’t (generally) make random mistakes; they make patterned mistakes that represent inappropriate models they perceive as appropriate. We should choose alternatives to the right answer that represent these misconceptions.

Consequently, we need to provide specific feedback for that particular misconception. That’s why any quiz tool that only has one response for all the wrong answers should be tossed out; it’s worthless.
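
To make that concrete, here’s a minimal sketch of what misconception-targeted feedback implies for the authoring structure. The names and the sample question are my own hypothetical illustration, not any particular quiz tool’s API:

```python
# Hypothetical sketch: each distractor encodes a known misconception
# and carries feedback targeted at that specific deficit.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Option:
    text: str
    correct: bool = False
    misconception: Optional[str] = None  # the flawed model this distractor represents
    feedback: str = ""                   # feedback aimed at that specific deficit

stem = "A train leaves at 2:40 and arrives at 4:15. How long was the trip?"
options = [
    Option("1 h 35 min", correct=True,
           feedback="Right: borrow 60 minutes, not 100, when subtracting times."),
    Option("1 h 75 min", misconception="treats minutes as base-100",
           feedback="Minutes roll over at 60, not 100; convert before subtracting."),
    Option("2 h 25 min", misconception="subtracts smaller digit from larger",
           feedback="Keep the order of subtraction; don't compare digit by digit."),
]

def respond(choice: Option) -> str:
    """Return feedback specific to the learner's (mis)conception,
    rather than one generic 'Incorrect' for every wrong answer."""
    return choice.feedback

print(respond(options[1]))  # misconception-specific feedback, not 'Wrong, try again'
```

The structure forces the designer to name the flawed model behind each distractor, which is exactly the design work that a single generic ‘Incorrect’ response skips.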

We need to ensure that the setting for the task is of interest to the learner. The contexts we choose should set up problems that the learner viscerally understands are important, and that they are interested in. We also need to remember, as mentioned with examples, that the contexts seen across both examples and practice determine the space of transfer, so that needs to be kept in mind as well.

The elements listed here are the elements that make effective practice, but also those that make engaging experiences (hence, the book). That is, games. While the best practice is individually mentored real performance, that doesn’t scale well, and the consequences can be costly. The next best practice, I argue, is simulated performance, tuned into a game (not turned, tuned). While model-driven simulations are ideal for a variety of reasons (essentially infinite replay, novelty, adaptive challenge), they can be simplified to branching or linear scenarios. If nothing else, just write better multiple-choice questions!
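
For what the simpler end of that spectrum can look like, here’s a small sketch of a branching scenario; the structure and content are hypothetical, just to show that a scenario is situations linked by choice-driven consequences:

```python
# Hypothetical sketch: a branching scenario as situations linked by choices.
# A linear scenario is the degenerate case with one choice per node.
scenario = {
    "start": {
        "situation": "A key client emails, upset about a slipped deadline.",
        "choices": {"own it and replan": "recover",
                    "blame the vendor": "escalate"},
    },
    "recover": {"situation": "The client accepts the new plan; trust intact.",
                "choices": {}},  # terminal node
    "escalate": {"situation": "The client escalates to your VP; trust damaged.",
                 "choices": {}},  # terminal node
}

def play(node_id: str = "start") -> None:
    """Walk the scenario, printing each situation and the available choices."""
    node = scenario[node_id]
    print(node["situation"])
    for label, next_id in node["choices"].items():
        print(f"  [{label}] -> {next_id}")

play()
```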

Note that, here, practice encompasses both formative and summative assessment. In either case the learner’s performing; it’s just whether you evaluate and record that performance to determine what the learner is capable of. I reckon assessment should always be formative, helping the learner understand what they know. And summative assessment, in my mind, has to be tied back to the learning objectives, seeing whether they can now do what they need to be able to do.

If you create meaningful, challenging, contextualized performance, you make effective practice. And that’s the key to behavior change, and learning. So practice making perfect practice, because practice makes perfect.

Designing Learning

28 February 2009 by Clark

Another way to think about what I was talking about yesterday, in revisiting the training department, is to take a broader view. I was thinking about it as Learning Design, a view that incorporates instructional design, information design, and experience design.

I’m leery of the term instructional design, as that label has been tarnished by too many cookie-cutter examples and rote approaches for me to feel comfortable with it (see my Broken ID series). However, real instructional design theory (particularly when it’s cognitive-, social-, and constructivist-aware) is great stuff (e.g. Merrill, Reigeluth, Keller, et al); it’s just that most of it’s been neutered in interpretation. The point being, really understanding how people learn is critical. And that includes Cross’ informal learning. We need to go beyond just the formal courses, and provide ways for people to self-help, and group-help.

However, it’s not enough. There’s also understanding information design. Now, instructional designers who really know what they’re doing will say, yes, we take a step back and look at the larger picture, and sometimes it’s job aids, not courses. But I mean more here. I’m talking about, when you do sites, job aids, or more, covering the information architecture, information mapping, visual design, and more, to really communicate and support the need to navigate. I’ve seen reasonable instructional design undone by bad interface design (and, of course, vice versa).

Now, how much would you pay for that? But wait, there’s more! A third component is the experience design. That is, viewing it not from a skill-transferral perspective, but instead from the emotional view. Is the learner engaged, motivated, challenged, and left fulfilled? I reckon that’s largely ignored, yet myriad evidence points us to the realization that the emotional connection matters.

We want to integrate the above. Putting a different spin on it, it’s about the intersection of the cognitive, affective, conative, and social components of facilitating organizational performance. We want to do the least we can to achieve that, and we want to support working alone and together.

There’s both a top-down and a bottom-up component to this. At the bottom, we’re analyzing how to meet learner needs, whether it’s fully wrapped with motivation, or just the necessary information, or providing the opportunity to work with others to answer the question. It’s about infusing our design approaches with a richer picture, respecting our learners’ time, interests, and needs.

At the top, however, it’s looking at an organizational structure that supports people and leverages technology to optimize the ability of individuals and groups to execute against the vision and mission. From this perspective, it’s about learning/performance, technology, and business.

And it’s likely not something you can, or should, do on your own. It’s too hard to be objective when you’re in the middle of it, and the breadth of knowledge to be brought to bear is far-reaching. As I said yesterday, what I reckon is needed is a major revisit of the organizational approach to learning. With partners we’ve been seeing it, and doing it, but we reckon there’s more that needs to be done. Are you ready to step up to the plate and redesign your learning?

Revisiting the Training Department

27 February 2009 by Clark

Harold Jarche and Jay Cross have been talking about rethinking the training department, and I have to agree. In principle, if there is a ‘training’ department, it needs to be coupled with a ‘performance’ department and a ‘social learning’ department, all under an organizational learning & performance umbrella.

What’s wrong with a training department? Several things you’ll probably recognize: all problems have one answer (‘a course’); no relationships with the groups providing the myriad portals; no relationship with anyone doing any sort of social learning; no ‘big picture’ comprehension of the organization’s needs; and typically the courses aren’t that great either!

To put it another way, it’s not working for the organizational constituencies. The novices aren’t being served, because the courses are too focused on knowledge rather than skills, aren’t sufficiently motivating to engage them, and are used even when job aids would do. The practitioners aren’t getting, or able to find, the information they need, and have trouble getting access to expert knowledge. And experts aren’t able to collaborate with each other, or to work effectively with practitioners to solve problems. Epic fail, as they say. OK, so that’s a ‘straw man’, but I’ll suggest it’s all too frequent.

The goal is a team serving the entire learnscape: looking at it holistically, matching needs to tools, nurturing communities, leveraging content overlap, and creating a performance-focused ecosystem. I’ve argued before that such an approach is really the only sustainable way to support an organization. However, that’s typically not what we see.

Instead, we tend to see different training groups making courses in their silos, with no links between their content (despite the natural relationships), often no link to content in portals, no systematic support for collaboration, and overall no focus on long-term development of individuals and capabilities.

So, how do we get there from here? There’s no easy answer, because (and this isn’t just consultant-speak) it depends on where the particular organization is, what makes sense as a particular end vision, and what metrics are meaningful to the organization. There are systematic ways to assess an organization (Jay, Harold, and I have drafted just such an instrument), and processes to follow to come up with recommendations for what you do tomorrow, next month, and next year.

The outcome should be a plan, a strategy, to move towards the goal. The path differs, as the starting points are organization-specific. One way to do it is DIY, if you’ve got the time; it’s cheaper, but more error-prone. The fast track is to bring in assistance and take advantage of a high-value, lightweight infusion of the best thinking to set the course. No points for guessing my recommendation. But with the economic crisis and organizational ‘efficiencies’, can you afford to stick to the old, ineffective path?

This time, it’s personal…

25 February 2009 by Clark

So on the way to dinner on Friday, my son told me that he’d tied a guy’s shoes together (the kid fell down when he tried to get up at the end of class, and was late to the next). I asked, and this was a) a friend, b) a prank (the latest volley in an ongoing series), c) the boy wasn’t hurt, and d) he was amused. Unacceptable, still. It was potentially dangerous, interfered with school operations, and was consequently inappropriate. I chided him to that effect, and thought no more about it. Until my wife let me know Monday night what the school administration had done as a consequence.

Three teachers, together, had reported it, not one of them talking to my son directly. So he was called into the office, and the Vice Principal who handled it decided on lunch-time detention for two days, at a special table in the cafeteria. We weren’t involved until afterwards, when my wife heard about it and then talked to the VP on the second day. OK, what he did wasn’t the smartest thing to do, and we absolutely believe that consequences are an appropriate response. As my wife said, 95% of the time she’ll side with the teachers (her dad was one). So it’s not that there was a response; it’s what the response was. Our issue is with the process used, and with the punishment.

Let’s start with the fact that he’s a good kid, who gets good grades because it’s expected of him, despite the current school situation being such that the content is dull and the homework staggering (he’s opting out of sports because he doesn’t feel he has the time). He’s bored at school, as the work’s too easy for him, and the repetitive drill is mind-numbing. However, no argument: his action wasn’t acceptable. In his case, being called to the office at all was probably enough, as the only previous time he’d been there was to be recognized for something good he did. Having a talking-to, for a first infraction, would likely weigh on him enough. Some time for reflection, and even writing an apology to the friend or the teachers, or just a treatise on the folly of the act, would be rehabilitative, useful, and understandable. Instead, we have a punitive action. “You’re bad, and we need to punish you.”

My wife talked to the VP, trying to point out that while intervention was certainly called for, public humiliation wasn’t. The VP denied that it was public, saying that the table is off to the side.   Yes, in the same room, and obviously the location of the ‘bad kids’.   As my son told us, a number of his friends walked by and commented.   I’m not buying it; it’s public humiliation, and that doesn’t make sense as a first recourse (if ever), particularly in a case of behavior that was bad judgment, not malicious.

So either I’m over-reacting, or the process they applied (the teachers not talking to him about it) and the result it produced (public humiliation for a first offense) are broken. While I admit it’s hard to be objective, I’m inclined to believe the latter. Shouldn’t we be using misbehavior as an opportunity to show how to respond appropriately? We may have societally moved away from rehabilitation in our penal system, but in our education system? What’s his lesson here? I mean, we don’t put people in the stocks anymore! Though I’m tempted, with a certain VP. Of course, showing up (albeit unnamed) in a blog post may be the same, eh?

Monday Broken ID Series: Examples

22 February 2009 by Clark

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting on Mondays. I intend to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say ‘bad designer’, but instead to point out how to do good design.

I see several recurring problems with examples, and they aren’t even the deepest problems. Examples tend to be mixed in with the concept, instead of separate, if they exist at all. Then, when they do exist, too often they’re cookie-cutter examples that don’t delve into the elements that make examples successful, let alone are intrinsically interesting, and yet we know what these elements are!

Conceptually, examples are applications of the concept in a context. That is, we have a problem in a particular setting, and we want to use the model as a guide to solving the problem. Note that the choice of examples is important. The broader the transfer space (that is, the more general the skills), the more you want examples that differ in many respects. Learners generalize the concept from the examples, and the extent to which they’ll generalize to all appropriate situations depends on the breadth of contexts they’ve seen (across both examples and practice). You need to ensure that the contexts the learner sees are as broadly disparate as possible.

Note that we should also be choosing problems and contexts that are of interest to the audience.   Going beyond just the cognitive role, we should be trying to tap into the motivational and engagement factors.   Factor that into the example design as well!

Now, we know that examples have to show the steps that were taken, with specific steps from beginning to end. And, I’d add, those steps have to refer back to the concept that guides the presentation. You can’t just say “first you do this, then you do that”; you have to say “first, using the model, you do this, and then the model says to do that”. You need to show the steps, and the intermediate work products. Annotating them is really important.

And that annotation is not just the steps, but also the underlying thought processes. The problem is, experts don’t even have access to their own thought processes anymore! Yet their thinking really works along lines like “well, I could’ve done A, but because of X I thought B was a better approach, and then I could’ve done C, but because of Y I tried D”, etc. The point being, there are a lot of contextual clues that they evaluate that aren’t even conscious, yet these clues are really important for learners. (BTW, this is one of the many reasons I recommend comics in elearning; thought bubbles are great for cognitive annotation.)
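
As an illustration of what such annotation might look like when captured explicitly, here’s a hypothetical sketch (my own structure and content, not a prescribed format) of a worked example whose steps carry both the link back to the model and the expert’s normally hidden rationale:

```python
# Hypothetical sketch: a worked example as annotated steps, each tied back to
# the guiding model and carrying the (normally invisible) expert rationale.
worked_example = [
    {"step": "Identify the audience's prior knowledge",
     "model_link": "Model step 1: analyze the learners",
     "thinking": "I could have started with content, but without knowing "
                 "the learners, the objectives would drift."},
    {"step": "Draft objectives as observable performances",
     "model_link": "Model step 2: objectives drive practice design",
     "thinking": "Tempted to list topics, but they aren't measurable, "
                 "so I switched to do-statements."},
]

for i, s in enumerate(worked_example, 1):
    print(f"{i}. {s['step']}")
    print(f"   model: {s['model_link']}")
    print(f"   thinking: {s['thinking']}")
```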

Another valuable component is showing mistakes and backtracking. This is a hard one to get your mind around, and yet it’s powerful both cognitively and emotionally. If experts model the behavior perfectly, then when learners try and make mistakes, they may turn off emotionally (“I’m having trouble, and it looks so easy; I must not be good at this”). In reality, experts make mistakes all the time, and learners need to know that. It keeps you from losing them altogether!

Cognitively it’s valuable, too. When experts show backtracking and repair, they’re modeling the meta-skills that are part of the expertise. Unpacking that self-monitoring helps learners internalize the ‘check your answer’ component that’s part of expert performance. This takes more work on the part of the designer, as we saw with the concept, but if the content is important (otherwise, why are you building a course?), it’s worth doing right.

Finally, I believe it’s important to convey the example as a story. Our brains are wired to comprehend stories, and a good narrative has better uptake. Having a protagonist document the context and problem, and then solve it with the model to achieve meaningful outcomes, is more interesting, and consequently more memorable. We can use a variety of media to tell stories, from prose through audio (think mobile and podcasts) and narrated slideshows to animation or video. Comics are another channel. Stories are also useful for conveying the underlying thought processes, via thought bubbles or reflective narration (“What was I thinking?…”).

So, please do good examples.   Be exemplary!

Strategy, strategically

21 February 2009 by Clark

In addition to working on the technology plan for my school district, I’ve also been assisting a not-for-profit trying to get strategic about technology.   The struggles are instructive, but looking across these two separate instances as well as the previous organizations I’ve assisted, I’m realizing that there are some common barriers.

The obvious one is time. The old saying about alligators and draining the swamp is all too true, and it’s only getting worse. Despite an economic stimulus package for the US and other countries, and (finally) a budget in my own state, things are not likely to get better soon. Even if companies could hire back everyone they’ve laid off, the transition time would be significant. It’s hard to sit back and reflect when you’re tackling more work with fewer resources. Yet we must.

The second barrier is more problematic. Strategic thinking isn’t easy or obvious, at least not to everyone. For some it’s probably in their nature, but I reckon for most it takes a breadth of experience, and an ability to abstract from that experience, to take a broader perspective. Abstraction, I know from my PhD research on analogy, isn’t done well without support. Aligning that perspective with organizational goals simultaneously adds to the task. Do it while keeping both short- and long-term value in mind, for several different layers of stakeholders, and you’re talking some serious cognitive overhead.

We do need to take the time to be strategic. As I was just explaining on a call, you don’t want to be taking small steps that aren’t working together towards a longer-term goal. If you’re investing in X, and Y, and Z, and they don’t build on each other, you’re missing an opportunity. If you have alternatives A and B, and A seems more expedient, then without looking to the future you might miss that B is the better long-term investment. If you don’t evaluate what else is going on, and leverage those initiatives, because you’re just meeting your immediate needs, you’re not making the best investment for the organization, and you’re putting yourself at risk. You need to find a way to address the strategic position, at least for a percentage of your time (and that percentage goes up with your level in the organization).

To cope, we use frameworks and tools to help reduce the load, and follow processes to support systematicity and thoroughness. The performance ecosystem framework is one specific to the use of technology to improve organizational learning, innovation, and problem-solving, but there are others. Sometimes we bring in outside expertise to help, as we may be too tightly bound to the context, and an external perspective can be more objective.

You can totally outsource it, to a big consulting company, but I reckon the principle of ‘least assistance’ holds here too. You want to bring in top thinking in a lightweight way, rather than ending up with a bunch of interns trying to tie themselves to you at the wrists and ankles. What can you do that will provide just the amount of help you need to make progress? I have found that a lightweight approach can work in engagements with clients, so I know it can be done. Whether you do it yourself, work with partners, or bring in outside help, don’t lose the forest for the trees; do take the time. You need to be strategic, so be strategic about it!

The ‘Least Assistance’ Principle

20 February 2009 by Clark

While I agree vehemently with most of a post by Lars Hyland, he said one thing I slightly disagree with, and I want to elaborate on it.   He was disagreeing with   “buying rapid development tools to bash out ill formed ‘e-learning’ to an audience that will not only be unimpressed but also none the wiser – or more productive”, a point I want to nuance.   I agree with not using rapid elearning to create courses for novices, but there is a role for bashing out courses for another audience, the practitioner.   And there’s something deeper here to tease out.

I want to bring up John Carroll’s minimalist instruction, and highly recommend it to you. He focused on a) meaningful tasks, b) getting to active learning quickly, c) including error recognition & recovery, and d) making learning activities self-contained (a lot like games, actually). In The Nurnberg Funnel, he documented how this design led to 25 cards, one per learning goal, that beat a 94-page traditionally designed manual hands-down in outcomes.

Another way to think about it is something Jim Spohrer mentioned to me once. Now, Jim’s been an Apple Fellow, and is leading research at IBM’s Almaden Research Center.   He really cares and likes to help people, but he’s very busy.   So he adopted a ‘least assistance’ principle, where he would ask himself what’s the least he can do to get this person going, because there was more to do and more people to help than he was able to keep up with.   And I think it is a useful way to think about supporting learning.

This sounds a lot like performance support, and that’s definitely a mind-set we need to adopt. When Harold Jarche and Jay Cross talk about the death of the training department, they’re talking about not focusing on courses, and instead taking a broader, performance perspective.   Obviously, we want to talk about portals of resources, but we also need to recognize that there are formal learning situations that don’t require the full formality.

We develop full courses to incorporate motivation, practice, all the things non-self-directed learners need. But there are times when we need to provide new information and skills to self-directed learners. When we’re talking to practitioners who are good at their job, know what they’re doing and why, and know that they need this information and how they’ll apply it, we can strip away a lot of the window dressing. We can just provide support to an SME so that their talk presents the relevant bits in a streamlined and effective way, and let them loose. That, to me, is the role of rapid elearning.

It’s not for novices, but it’s effective, and more efficient. In this economic climate, we don’t have the luxury of fully developing courses for every need. Moreover, in any climate, we shouldn’t give people what they don’t need; instead we need to focus on the ‘least assistance’ we can give them.

In many cases, the least assistance we can give is self-help, which is why I believe social learning tools are one of the best investments that can be made. The answer may well be ‘out there’, and rather than learning designers trying to track it down and capture it, the learner can send out the need, and there’s a good chance an answer will come back! There’s a lot to making such an environment work; it’s not the case that ‘if you build it, they will learn’. But it’s still going to fill a sweet spot in the performance ecosystem that may not be being hit right now.

Don’t look for everything you can do in one situation, unless you’re flush with too much time and resources (in which case, watch out!); instead, look for the least you can do that will get the job done, so you can do more for everybody. It’s likely that’s more to their taste, anyway. And that’s enough from me on that!

Measuring the right things

18 February 2009 by Clark

For sins in my past, I’ve been invited onto our school district’s technology committee. So, yesterday evening I was there as we were reviewing and rewriting the technology plan (being new to the committee, I wasn’t there when the existing one was drafted). The plan is broken up into five parts, including curriculum, infrastructure, and funding; I was on the professional development section, with a teacher and a library media specialist. Bear with me, as the principles here are broader than schools.

The good news: they’d broken the goals up into two categories, teachers’ tech skills and the integration of tech into the curriculum. And they were measuring the tech skills.

The bad news: they were measuring things like the percentage of teachers who’d put up a web page (using the district’s licensed software), and the use of the district’s electronic grading system. And their professional development didn’t include support for revising lesson plans.

Houston, we have some disconnects!

So, let’s take a step back.   What matters?   What are we trying to achieve?   It’s that kids learn to use technology as a tool in achieving their goals: research, problem-solving, communication.   That means, their lessons need to naturally include technology use.   You don’t teach the tool, except as ancillary to doing things with it!

What would indicate we were achieving that goal?   An increase in the use of lesson plans that incorporate technology into non-technology topics would be the most direct indicator.   Systematically, across the grade levels.   One of the problems I’ve seen is that some teachers don’t feel comfortable with the technology, and then for a year their students don’t get that repeated exposure.   That’s a real handicap.

However, teachers’ lesson plans aren’t evaluated (!). They range from systematic to ad hoc. The way teachers are evaluated is that they set two action research plans for the year, take steps and assess the outcomes (and are observed twice), and that constitutes their development and evaluation. So, we determined that we could make one of those action research projects focus on incorporating technology (if, as the teacher in our group suggested, we can get the union to agree).

Then we needed to figure out how to get teachers the skills they need. They were assessed on their computer skills once a year, and courses were available. However, there was no link between the assessment and the courses. A teacher complained that the test was a waste of time, and then revealed that it’s 15-30 minutes once a year. The issue wasn’t really the time; it’s that the assessment wasn’t being used for the teachers’ benefit.

And instead of just tech courses, I want them to be working on lesson plans, and, ideally, using the tools to do so.   So instead of courses on software, I suggested that they need to get together regularly (they already meet by grade level, so all fifth grade teachers at a school meet together once a week) and work together on new lesson plans.   Actually, I think they need to dissect some good examples, then take an existing lesson plan and work to infuse it with appropriate technology, and then move towards creating new lesson plans.   To do so, of course, they’ll need to de-emphasize something.

Naturally, I suggested that they use wikis to share the efforts across the schools in the district, but that’s probably a faint hope.   We need to drive them into using the tools, so it would be a great requirement, but the level of technology skills is woefully behind the times.   That may need to be a later step.

One of the realizations is that, over maybe a ten-year window, this problem may disappear: those who can’t or won’t use tech will retire, and the new teachers will have it by nature of the culture. So it may be a short-term need, but it is critical. I can’t help feeling sorry for those students who miss a year or more owing to one teacher’s inability to make the transition.

At the end, we presented our results to the group. We’ve a new coordinator who seems enthusiastic and yet realistic, so we’ll see what happens. Fingers crossed! But at least we’ve tried to show how you could move towards important goals within the constraints of the system. What ends up in the plan remains to be seen, but it’s just a school-level model of the process I advocate at the organizational level: identify what the important changes are, and align the elements to achieve them (a bit like ID, really). If you’re going to bother, do it right, no?
