Learnlets

Clark Quinn’s Learnings about Learning

The worst of best practices and benchmarking

5 October 2009 by Clark 4 Comments

In a recent post, Jane Bozarth takes ‘best practices’ to task, and I want to elaborate on her point. In the post, she talks about how best practices are contextualized, so they may work well here but not there. She’s got a cute and apt metaphor with marriage, and she’s absolutely right.

However, I want to go further. Let me set the stage: years ago as a grad student, our lab was approached with the task of developing an expert system for a particular task. It certainly was something we could have done. Eventually, we asked what the description was of the ideal performance, and were told that the best source was the person who’d been doing it the longest. Now, people are fabulous pattern matchers, and performing something for a long time with some reflection on improvement likely could get you some really good performance. However, there are some barriers: experts no longer have access to their own performance; without an external frame of reference, they can get trapped in local maxima; and other phenomena of our cognitive architecture interfere with optimal performance (e.g. set effects, functional fixedness). I’ve riffed on this often: expertise gets compiled, and experts tell stories about what they do that have little correlation to what they actually do. We didn’t end up taking the opportunity. So that longest-serving performer may be the best out there, but is that the best that can be?

And that’s the problem. Why are we only looking at what the best is that anyone’s doing? Why not abstract across that and other performances, looking for emergent principles, and trying to infer what would in principle be the best? That is, if it hasn’t already been documented in theory and made available (academics do that sort of thing as a career, and in between the obfuscation there are often good thoughts and answers). The same with benchmarking: it’s relatively the best, not absolutely the best.

I’ve largely made a career out of trying to find the principled best approaches, interpreting cognitive science research and looking broadly across relevant fields (including HCI/UI, software engineering, entertainment, and others) to find emergent principles that can guide the design of solutions. And, reliably, I find that there are ideas, concepts, models, etc. that can guide efforts as broadly dispersed as virtual worlds, mobile, adaptive systems, content models, organizational implementation, and more. Models emerge that serve as checklists, principles, and frameworks for design, allowing us to examine tradeoffs and make the principled best solution. I regularly capture these models and share them (e.g. my models page, and more recent ones regularly appear in this blog).

I’m not saying it’s easy, but look across our field and you’ll recognize there are those who are doing good work in either translating research into practice or finding emergent patterns that resonate with theoretical principles. It’s time to stop looking at what other organizations are doing in their context as a guide, and start drawing upon what’s known, customizing it to your context, and then having a cycle of continual tuning. With the increasing pressures to be competitive, I’d suggest that just being good enough isn’t. Being the best you can be is the only sustainable advantage.

Let’s see: copy your best competitor and merely keep pace, or shoot for the principled best that can be in the category and hold an unassailable position of leadership? The answer seems obvious to me. How about you?

Seed, feed, & weed

17 September 2009 by Clark 12 Comments

In my presentation yesterday, I was talking about how to get informal learning going. As many have noted, it’s about moving from a notion of being a builder, handcrafting (or mass-producing) solutions, to being a facilitator, nurturing the community to develop its own capabilities. Jay Cross talks about the learnscape, while I term it the performance ecosystem. The point, however, is that from the learner’s point of view, all the resources needed are ‘to hand’ through every stage of knowledge work: courses, information resources, people, representational tools, and the ability to tap into the 4 C’s (create, contextualize, connect, co-create).

Overall, it taps into our natural learning, where we experiment, reflect, converse, mimic, collaborate, and more. Our approach to formal learning needs to more naturally mimic this, having us attempt to do something and resourcing that attempt with information and facilitation. Our approach to informal learning similarly needs to reflect our natural learning.

Networks grow from separate nodes, to a hierarchical organization where one node manages the connections, but the true power of a network is unleashed when every node knows what the goal is and the nodes coordinate to achieve it.   It is this unleashing of the power of the network that we want to facilitate.   But if you build it, they may not come.

Networks take nurturing.   Using the gardener or landscaper metaphor,   yesterday I said that networks need seeding, feeding, and weeding.   What do I mean?   If you want to grow a network, you will have to:

Seed: you need to put in place the network tool, where individuals can register and then create the types of connections they need. They may self-organize around roles, or tasks, or projects, or all of the above. They may need discussion forums, blogs, wikis, and IM. They may need to load, tag, and search on resources. You likely will need to preload it with resources, to ensure there’s value to be found. And you’ll have to ensure that there are rewards for participating and contributing. The environment needs to be there, and people have to be aware of it.

Feed: you can’t just put it in place, you have to nurture the network. People have to know what the goals are and what their role is. Don’t tell them what to do, tell them what needs doing. You may need to quietly ‘encourage’ the opinion makers to participate. And the top of the food chain needs to not only anoint the process, but model the behavior as well. The top level of the group (i.e. not the CEO, but the leader of whatever group you’ve chosen to facilitate) needs to be active in the network. You may need to highlight what other people have said, elicit questions and answers, and take a role both within and outside the network to get it going. You may have to go in and reorganize the resources, taking what’s heard and making it concrete and usable. You’ll undoubtedly have to facilitate the skills to take advantage of the environment. And you have to ensure there’s value there for them.

Weed: you may have to help people learn how to participate. You may well find some inappropriate behavior, and need to help those involved learn what’s acceptable. You’ll likely have to develop, and modify, policies and procedures. You may have to take out some submitted resources and revise them for better usability. You may well have to address cultural issues that arise, when you find that participation is stunted by a lack of tolerance for diversity, no openness to new ideas, no safety for putting ideas out, and the absence of other factors that facilitate a learning organization.

However, if you recognize that it will take time and tuning, and diligently work to nurture the network, you should be able to reap the benefits of an aligned group of empowered people. And those benefits are real: innovation, problem-solving, and more, and they are the key to organizational competitiveness going forward. Ready to get grubby?

Driving formal & informal from the same place

8 September 2009 by Clark 4 Comments

There’s been such a division between formal and informal: the fight for resources and mindspace, and the struggle people have getting their minds around making informal concrete. However, I’ve been preparing a presentation that takes another way of looking at it, and I want to suggest that, at core, both are being driven from the same point: how humans learn.

I was looking at the history of society, and it’s getting more and more complex. Organizationally, we went from the village, to the city, and started getting hierarchical. Businesses are now retreating from that point of view, trying to get flatter and more networked.

Organizational learning, however, seems to have done almost the opposite: from networks of apprenticeship through most of history, through the dialectical approach of the Greeks that started imposing a hierarchy, to classrooms that treat each person as an independent node, identical, autonomous, and unconnected.

Certainly, we’re trying to improve our pedagogy (to more of an andragogy) by looking at how people really learn. In natural settings, we learn by being engaged in meaningful tasks, where there are resources to assist us and others to help us learn. We develop in communities of practice, with our learning distributed across time and across resources.

That’s what we’re trying to support through informal approaches to learning. We’re going beyond just making people ready for what we can anticipate, and supporting them in working together to go beyond what’s known, and be able to problem-solve, to innovate, to create new products, services, and solutions.   We provide resources, and communication channels, and meaning representation tools.

And that’s what we should be shooting for in our formal learning, too. Not an artificial event, but meaningful activity that learners recognize as important, with resources to support it and, ideally, collaboration to help disambiguate and co-create understanding. The task may be artificial, the resources structured for success, but there’s much less of a gap between what they do for learning and what they do in practice.

In both cases, the learning is facilitated. Don’t assume self-learning skills, but support both task-oriented behaviors and the development of self-monitoring and self-learning.

The goal is to remove the artificial divide between formal and informal, and recognize the continuum of developing skills from foundational abilities into new areas, developing learners from novices to experts both in the domain and in learning itself.

This is the perspective that drives the vision of moving the learning organization role from ‘training’ to learning facilitator. Across all organizational knowledge activities, you may still design and develop, but you nurture as much, or more.   So, nurture your understanding, and your learners.   The outcome should be better learning for all.

Facilitating Learning

4 September 2009 by Clark Leave a Comment

In last night’s #lrnchat on instructional design, there was some discussion of the term ‘learning facilitator’ versus ‘trainer’ (which now I can’t find!?!), and it got me wondering. I’ve also been thinking about a set of talks I may be giving, and how to break them up. There was also a discussion on ITFORUM that expanded into how experts are losing problem-solving skills and how to develop them. It leads me to think about what learning is, and why we are arguing that the new role in the organization will be learning facilitation, not ‘instruction’ or ‘training’.

How do we learn? Not how do we believe we should be instructed, but how do we learn? If we look at anthropology, empirical studies, psychology, and more, the ideal learning happens when learners get why they’re learning, are working on meaningful tasks, have support around them, are given time to reflect, and more. Recourse to knowledge resources like tapes, videos, texts, etc. is driven by need, not pre-determined. It happens best when the task has a level of ambiguity where learners collaborate to understand. There’s problem-solving, experimentation and evaluation, and more.

This happens naturally among communities of practice, and so for much of organizational learning, creating an environment where this can happen around organizational goals really is the ‘informal learning’ Jay Cross talked about in his book on the topic. You can call the actual deliberate support of informal learning ‘non-formal’ or not (I’m not hot on the idea, but can see how it might help some folks get their minds around it). However, I do strongly want to suggest that supporting informal learning in systematic ways is one of the highest value investments an organization can make in being nimble, agile, innovative, and consequently successful.

Then, we go back and look at situations where we have new folks, folks moving to new areas (practitioners promoted to managers, where they’re new to management), new processes being introduced (whether a sales approach, new technology in a product, or a new service), and people wanting to reskill. This is more about execution, and is formal learning, where we need to support motivation and manage anxiety as well as develop new skills. The point is, for the novice-to-practitioner transition we need the formal treatment, whereas the practitioner-to-expert transition is more informal, where even information can be sufficient instruction.

When we have this formal situation, we often do the information, example, practice routine that’s been shown to work. However, newer pedagogies, where we put meaningful tasks up front and organize the learning around them, making it structurally closer to the more natural learning model, are proving valuable. Call it social constructivist, or connectivist, or any other pedagogical framework. What you do is carefully structure the task to be meaningful and obviously important to the learner, carefully control the challenge, and scaffold support for the knowledge and resources. This, really, is taking instructional design in a new direction, still requiring design, but using a new pedagogy that’s more learning facilitation than ‘training’. It may be that this isn’t what folks think of as training, and ideally training is more learning facilitation, but I find the relabelling helps convey the necessary approach, as ‘trainer’ can unfortunately mean ‘spray and pray’ or ‘show up and throw up’, at least in practice.

Note that this facilitation needs to address something more, both formally and informally.   You have to develop the ability to learn in this way – the problem representation, information access, and experimentation skills – not take it for granted.   Not everyone is a good self- or group-learner, and yet you want them to get better at this for the informal learning to really be optimal.   Make those skills explicit, and scaffold that development as well!

No one said it’s easy, but it seems to me a more robust, important, and valuable contribution to make, a task to be proud of.   That’s why many of us are now suggesting that the learning role in an organization will move to facilitation from an information presentation and testing role.   Knowledge is not what’s going to be useful going forward, but skills in applying that knowledge.   So my suggestion is to start thinking about facilitating learning, and abandon a focus on knowledge development. That’s where I think instructional design has to go, and I think others are seeing and saying it too.   Are you?

Learning Experience Creation Systems

2 September 2009 by Clark 2 Comments

Where do the problems lie in getting good learning experiences? We need them, as it’s becoming increasingly important to get the critical skills really nailed, not just ‘addressed’. It’s not about dumping knowledge on someone, or the other myriad ways learning can be badly designed. It’s about making learning experiences that really deliver. So, where does the process of creating a learning experience go wrong?

There’s been an intriguing debate over at Aaron Silver’s (@mrch0mp3rs) blog about where the responsibility lies between clients and vendors for the knowledge needed to ensure a productive relationship. One of the issues raised (who, me?) is understanding design, but it’s clearly more than that, and the debate has raged.

Then, a post in ITFORUM asked about how to redo instructor training for a group where the instructors are SMEs, not trainers, and identified barriers around curriculum, time, etc. What crystallized for me is that it’s not a particular flaw or issue; it’s a system that can have multiple flaws, multiple points of breakdown.

[Image: Learning Experience Design System]

The point is, we have to quit looking at it as design, development, etc., and view it not just as a process, but as a system: a system with lots of inputs, processes, and places to go wrong. I tried to capture a stereotypical system in this picture, with lots of caveats: clients or vendors may be internal or external, there may be more than one talent, etc. It really is a simplified stereotype, with all the negative connotations that entails.

Note that there are many places for the system to break, even in this simplified representation. How do you get alignment between all the elements? I think you need a meta-level: learning experience creation system design. That is, you need to look at the system with a view towards optimizing it as a system, not as a process.

I realize that’s one of the things I do (working with organizations to improve their templates, processes, content models, learning systems, etc), trying to tie these together into a working coherent whole. And while I’m talking formal learning here, by and large, I believe it holds true for performance support and informal learning environments as well, the whole performance ecosystem.   And that’s the way you’ve got to look at it, systemically, to see what needs to be augmented to be producing not content, not dry and dull learning, not well-produced but ineffective experiences, but the real deal: efficient, effective, and engaging learning experiences. Learning, done right, isn’t a ‘spray and pray’ situation, but a carefully designed intervention that facilitates learning.   And to get that design, you need to address the overall system that creates that experience.

The client has to ‘get’ that they need good learning outcomes, the vendor has to know what that means.   The designer/SME relationship has to ensure that the real outcomes emerge.   The designer has to understand what will achieve these outcomes.   The ‘talent’ (read graphic design, audio, video, etc) needs to align with the learning outcomes, and appropriate practices, the developer(s) need to use the right tools, and so on.   There are lots of ways it can go wrong, in lack of understanding, in mis-communication, in the wrong tools, etc.   Only by looking at it all holistically can you look at the flows, the inputs, the processes, and optimize forward while backtracking from flaws.

So, look at your system.   Diagnose it, remedy it, tune it, and turn it into a real learning experience creation system.   Face it, if you’re not creating a real solution, you’re really wasting your time (and money!).

Design ‘debt’ and quality process

10 August 2009 by Clark 8 Comments

A tweet from Joshua Kerievsky (@JoshuaKerievsky) led me to the concept of design debt in programming.   The idea is (quoting from Ward Cunningham):

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite…. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.

I started wondering what the equivalent in learning design would be. Obviously, software design isn’t the same as learning design, though learning design could stand to benefit from what software engineers know about process and quality. For example, the Personal Software Process’s focus on quality review and data-driven improvement could do wonders for improving individual and team learning design.

Similarly, refactoring to remove typical bad practices in programming could easily analogize to the reliable patterns we see in Broken ID.   There are mistakes reliably made, and yet we don’t identify them nor have processes to systematically remedy them.
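
To make the refactoring analogy concrete, here is a minimal sketch (plain Python, with hypothetical function names of my own, not anything from the post or from Cunningham) of ‘not-quite-right’ shipped code and the small rewrite that pays the debt back; the learning-design parallel would be consolidating a duplicated, slightly-off instructional pattern into one well-designed, reusable template:

```python
# Before: quick-and-dirty duplication. It ships, but every copy accrues
# "interest": any fix or improvement now has to be made in several places.
def quiz_percent(answers):
    correct = 0
    for a in answers:
        if a == "correct":
            correct += 1
    return correct / len(answers) * 100   # also breaks on an empty list

def survey_percent(responses):
    agree = 0
    for r in responses:
        if r == "agree":
            agree += 1
    return agree / len(responses) * 100   # same bug, copied along with the logic

# After: the debt is paid back with a rewrite; one consolidated, corrected unit.
def percent_matching(items, target):
    """Percentage of items equal to target; 0.0 for an empty list."""
    if not items:
        return 0.0
    return sum(1 for item in items if item == target) / len(items) * 100
```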

What are the consequences of these mistakes?   It’s clear we often take shortcuts in our learning design, and let’s be honest, we seldom go back. For big projects, we might create iterative representations (outlines, finished storyboards), and ideally we tune them once developed, but seldom do we launch, and then reengineer based upon feedback, unless it’s heinous.   Heck, we scandalously seldom even measure the outcomes with more than smile sheets!

For software engineering, the debt accrues as you continue to patch the bad code rather than fixing it properly (paying off the principal). In learning design, the cost is in continuing to use the bad learning design: you’ve minimized the effectiveness, and consequently wasted the money it cost and the time of the learners. Another way we accrue debt is to take learning designed for one mode, e.g. F2F delivery, and re-implement it as elearning, synchronous or asynchronous, without redesign.

In software engineering, you’re supposed to design your code in small, functional units with testable inputs and outputs; there might be different ways of accomplishing things inside, but the important component is the testable result. Our learning equivalent would be how we address learning objectives: of course we first have to get the objectives right, and how they build to achieve the necessary outcome, but then the focus shifts to getting the proper approach to meeting each objective. If we focus on the latter, it’s clear we can think about refactoring to improve the design of each component.
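
As an illustration of that ‘small, testable units’ discipline (again my own hypothetical example, not from the post): the test pins down the observable result, much as a well-written objective pins down the outcome, while the internals stay free to be refactored:

```python
def mastery_reached(scores, threshold=0.8):
    """True when the average practice score meets the threshold."""
    if not scores:
        return False
    return sum(scores) / len(scores) >= threshold

# The testable contract: these assertions hold no matter how the average
# is computed internally, so the implementation can be refactored freely.
assert mastery_reached([0.9, 0.85, 0.8]) is True
assert mastery_reached([0.5, 0.6]) is False
assert mastery_reached([]) is False
```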

Frankly, our focus on process is still too much on a waterfall model that’s been debunked as an approach elsewhere. We don’t have quality controls in a meaningful way, and we don’t check to see what reliable mistakes we’re making. Maybe we need a quality process for design: I see standards, but I don’t see review. We have better and better processes (e.g. Merrill’s Ripple in a Pond), but I’m still not seeing how we bake review and quality into them. Seems to me we’ve still a ways to go.

Complicit Clients

6 August 2009 by Clark 5 Comments

I regularly rail against cookie-cutter learning design, boring elearning, etc. I like to blame it on designers who don’t know the depths of learning behind the elements of design, and perhaps also on managers who don’t work to ensure that the learning objectives are tied closely to meaningful business outcomes. And I think that’s true, but of course there’s another culprit as well: clients who just ask for the same old thing!

I regularly work with a couple of partners who use me when there’s a need to go to the ‘next level’, whether it’s to mobile, pushing the engagement envelope, or working more strategically (that’s one of the ways I help clients, too). However, too often they’re just asked to turn content into courses, and the clients don’t care that the learning objectives in that content are too low-level, too knowledge-focused, completely abstract or de-contextualized, and generally not meaningful. Now, my partners generally push back a bit, trying to help the client realize the value of a deeper design, but many times the client doesn’t want to put any more money in and doesn’t want to think about it; they just want that course up with a quiz (even with a pre-test, *shudder*!). And my partners will go along, because creating elearning is their business and they can’t just turn away work.

And I’ve heard that from in-house departments as well. As one of the attendees at my strategic elearning workshop a couple of months ago said, the managers from other business units say “just do that stuff you do” and don’t want any deeper thought put into it. They want it fast, based upon the content, and apparently don’t care that it isn’t going to lead to any meaningful change. Or don’t know the difference. Hey, they learned that way, so it must be OK, right?

However, I think we owe it to the learners, to those clients, and to ourselves to start educating those clients, internal or external, about good learning. You’ve got to know it yourself first, of course, but once you’re doing it anyway, there’s really no extra overhead at the first level. But you want to start pushing back: “what’s the behavior that needs to change?”, or “what decisions do they need to be able to make that they can’t make correctly now?” And we need to ask “how will you know that it’s changed? What are the metrics that you’re trying to impact?” Once you’ve got them thinking about measurable change, you have the opportunity to start talking about meaningful impact and good design to achieve outcomes.

Frankly, you can’t complain about relevance to the organization if you’re not fighting to achieve better outcomes, ones that matter.   So, educate yourselves, improve your processes, and then fight to be doing more meaningful stuff.   Hey, we’re supposed to be about learning, and marketing our services is really about good customer education! Get them educated, and get to be doing more meaningful and consequently rewarding design.

Virtual Worlds #lrnchat

31 July 2009 by Clark 4 Comments

In last night’s #lrnchat, the topic was virtual worlds (VWs).   This was largely because several of the organizers had recently attended one or another of the SRI/ADL meetings on the topic, but also because one of the organizers (@KoreenOlbrish) is majorly active in the business of virtual worlds for learning through her company Tandem Learning.   It was a lively session, as always.

The first question to be addressed was whether virtual worlds had been over- or underhyped. The answer isn’t one or the other, of course. Some felt underhyped, as there’s great potential. Others thought they’d been overhyped, as there’s lots of noise but few real examples. Both are true, of course. Everyone pretty much derided the presentation of powerpoints in Second Life, however (and rightly so!).

The second question explored when and where virtual worlds make sense. Others echoed my prevailing view that VWs are best for inherently 3D and social environments. Some interesting nuances came in exploring the thought that 3D doesn’t have to be at our scale; we can do micro or macro 3D explorations as well, and not just of distance but also of time. Imagine exploring a slowed-down, expanded version of a chemical reaction with an expert chemist! Another good idea was contextualized role plays. Have to agree with that one.

Barriers were explored, and of course value propositions and technical issues ruled the day. Making the case is one problem (a Forrester report was cited that says enterprises do not yet get VWs), and the technical (and cognitive) overhead is another.   I wasn’t the only one who mentioned standards.

Another interesting challenge was the lack of experience in designing learning in such environments. It’s still early days, I’ll suggest, and a lot of what’s being done is reproductions of other activities in the new environment (the classic problem: initial uses of a new technology mirror the old technology). I suggested that we have principles (what good learning is and what VW affordances are) that should guide us to new applications without having to have that ‘reproduction’ stage.

I should note that having principles does not preclude new opportunities coming from experimentation, and I laud such initiatives.   I’ve opined before that it’s an extension of the principles from Engaging Learning combined with social learning, both areas I’ve experience in, so I’m hoping to find a chance to really get into it, too.

The third question explored what lessons can be learned from social media to enhance appropriate adoption of VWs.   Comments included that they needed to be more accessible and reliable, that they’ll take nurturing, and that they’ll have to be affordable.

As always, the lrnchat was lively, fun, and informative. If you haven’t tried one, I encourage you to at least take it for a trial run. It’s not for everyone, but some admitted to it being an addiction! ;) You can find out more at the #lrnchat site.

For those who are interested in more about VWs, I want to mention that there will be a virtual world event here in Northern California September 23-24, the 3D Training, Learning, & Collaboration conference.   In addition to Koreen, people like Eilif Trondsen, & Tony O’Driscoll (who has a forthcoming book with Karl Kapp on VW learning) will be speaking,   and companies like IBM and ThinkBalm are represented, so it should be a good thing. I hope to go (and pointing to it may make that happen, full disclaimer :).   If you go, let me know!

Making designing good learning easier

30 July 2009 by Clark Leave a Comment

On my last post, I got a comment that really made me think. The problem was content coming as PPTs from SMEs, and the question was a poignant one: “Given limited time and resources on a project how can you plan in advance to ensure that your learning is engaging and creates effective outcomes?” I replied in a comment, but I’d like to elaborate on that here.

I like the focus on the ‘planning’ part: what can you do up front to increase the quality of your learning outcomes? It’s a recursive design problem: people need to be able to design better, so what training, job aids, tools, and/or social learning can we develop to make that happen? Having just done this on a project where a team I was a member of was responsible for generating a whole curriculum around the domain, I can speak with some confidence about how to make this work.

First are the tools. Too often, the templates enforce rigor around having the elements, rather than around what makes those elements really work. So, on the project, I not only guided the design of the templates, but also the definitions associated with the elements, which helped ensure they accomplished the necessary learning activities. For example, it’s no good to have an introduction that doesn’t activate the relevant prior experience and knowledge, doesn’t help the learner comprehend why this learning is important, or even accomplishes this in an aversive way (can you say “pre-test”? :). This is the performance support component that helps make it easy to do things well and more difficult to do the wrong thing. Similarly with ensuring meaningful activity in the first place, etc.

Next is the understanding. This comes both by creating a shared understanding in the team, and then refining the process, making the outcome a ‘habit’. First, I’d worked with some of the team before, so they shared my design principles; then I presented and co-developed that understanding with the client. Then, as first-draft content came out, I’d critique it and use that to tune the template, and the understanding amongst the content developers.

The involvement in refining the design process took some time, but really paid off: the quality of the resulting output took a steep increase and then stabilized as a good quality learning experience, yet one reproducible in a cost-effective, sustainable, and manageable way.

As I’ve mentioned before, the nuances between bad elearning and really effective and engaging content are subtle to the untrained eye, but the outcomes are not, both subjectively in the learner’s experience and objectively in the results. You should be collecting both those metrics and reviewing the outcomes, as they provide useful information about how your design is working (or not) and how to improve it.

If it matters, and it should, you really should be reviewing and tuning your processes to achieve engagement and learning outcomes. It’s not more expensive in the long term, though it does take more work. But otherwise, it’s just a waste of money, and that is expensive! You’ll end up in the situation Charles Jennings cites, where “you might as well throw the money spent on these activities out the window.” Don’t waste money; spend the time assuring that your learning design processes achieve what they need to. Your organization, and your learners, will thank you.

Creating Stellar Learning

28 July 2009 by Clark 5 Comments

Getting the details right about instructional design is quite hard, or at least it appears that way, judging from how many bad examples there are. Yet the failures are more from a lack of knowledge than from inherent complexity. While there are some depths to the underlying principles that aren’t sufficiently known, they can be learned. A second level, embedding systematic creativity into the process, is another component that’s also missed, though this time it’s from a broken process more than a lack of knowledge.

What we want are learning solutions that really shine: where the learning experience is engaging, efficient, and effective.   Whether you’re creating products for commercial sale, or solutions for internal or external partners, you want to take your learning experience design to the next level.   So, how does an organization improve their learning design process to create stellar learning?

Let’s go through this, step by step.   First, you’ve got to know what you should be doing. I’ve gone on before about what’s broken in learning design, and what needs to be done.   That can be learned, developed, practiced, and refined.   Ideally, you’d have a team with a shared understanding of what really good learning is composed of and looks like. But it’s not just the deep learning.

There’s more: the team needs to develop both an understanding of the learning principles and a creative approach that encourages striking a balance between pragmatic constraints and a compelling experience. Note that creating a compelling experience isn’t about wildly expensive production values, but instead about ensuring meaningfulness, both of the content and the context (read: examples and practice). The learners have to be engaged cognitively and emotionally, challenged to work through and apply the material, to really develop the skills. If not, why bother? Again, it’s not about expensive media; it can be done in text, for crying out loud! (Not that I’m advocating that, but just to emphasize it’s about design, not media.)

I find that it’s not that designers aren’t creative, however, but that there’s just no tolerance in the system for taking that creative step. Yes, it can be hard to break out of old approaches, but there has to be an appreciation for the value of creating engaging experiences. I will admit that initially the process may take a bit longer, but with practice the design doesn’t take longer, yet the results are far better. It does, however, take a shared understanding of what an engaging experience is, just as it takes an understanding of the nuances of creating meaningful learning.

And that level of understanding, about both deep learning and creative experience design, can be developed as a shared understanding among your team in very pragmatic ways (applying those principles to the design of that learning, too). It’s just not conscionable anymore to be doing merely mediocre design. It won’t lead to learning, and it’s a waste of money as well as a waste of learners’ time.

That covers the design, and even a bit of the process, but what’s also needed is a look at your design tools and processes. And I’m not talking about whether you use Flash or not; what I’m talking about is your templates. They can, and should, be structured to support the design I’m talking about. Too often, the existing constraints stifle the very depth and creativity needed, saddling designers with unnecessary components and not requiring the appropriate ones. Factors that can be improved include templates for design, tools for creation, and even underlying content models! They all have to strike the balance between supportive structure and lack of confinement.

Look, I’ve worked numerous times on projects where I’ve helped teams understand the principles, refine their processes, and yield far better outcomes than you usually get. It’s doable! Yes, it takes some time and work, but the outcome is far better. On the flip side, I’ve reliably gone through and eviscerated mediocre design, systematically. The point is not to make others look bad, but instead to point out where and how to improve the product. Those flaws can be remedied by the teams that developed it. Teams can learn good design. My goal, after all, is better learning!

A caveat: to the untrained eye, the nuances are subtle.   That’s why it’s easy to slide by mediocre design that looks good to the undiscerning stakeholder.   Stellar design doesn’t seem that much better, until you ascertain the learner’s subjective experience, and look at the outcomes as well.   In fact, I recall one situation where there was a complaint from a manager about why the outcome didn’t look that different.   I walked that manager through the design, and the complaints changed to accolades.

You should do it because it’s the right thing to do, but you can justify it as well (and when you do walk folks through the nuances, they’ll learn that you really do know what you’re talking about).   There’s just no excuse for any more bad learning, so please, please, let’s start creating good learning experiences.
