Learnlets

Clark Quinn’s Learnings about Learning

Better Benchmarking

20 March 2019 by Clark

I was on a phone call and was asked whether I compare my clients against others in the business to help them figure out where they're at; e.g., do I offer my partners the chance to benchmark? After a bit of thought, I said that no, I didn't, and explained why. They found my answer intriguing, so I thought I'd share it with you.

So, as I've said before, I don't like best practices. And, as I've written before, I think we shouldn't benchmark ourselves against others. That's a bad practice. Why? Because then we're comparing ourselves against a relative measure. We should instead be comparing ourselves to a principled metric of where we could and should be.

[Image: Principles and Approaches for L&D]

In fact, in the Revolution book, I created such a benchmark. Using my performance ecosystem model, I took the six fundamental elements and elaborated them. The first core element I documented is a learning culture. That's accompanied by the approach to formal learning, looking at your instructional design and delivery. Then you move to performance focus: how you're supporting performance in the world. We move on to social: how you're facilitating informal learning and innovation. The next step is how you measure what you're doing. Finally, there's your infrastructure: how you're creating the ecosystem environment. For each, I have a principle and an approach.

What I've done in the benchmarking instrument is take each of these and extend them. So, for each element, I broke it into two components, as there are nuances. And, for each, I proposed four levels of maturity:

  • Unaware: here you’re not thinking ecosystem
  • Initiating: now you’re beginning to establish an ecosystem approach
  • Mature: you’ve reached a working approach
  • Leading: at this level you’re setting the pace, thinking ahead

Thus, rather than benchmarking yourself against others, you have a principled approach with which to measure yourself. This instrument is fully elaborated in the Revolutionize L&D book, and goes into detail on each of the twelve rows.
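To make the shape of the instrument concrete, here's a minimal sketch in Python. It's purely illustrative: the maturity levels and six elements come from the description above, but the component labels are placeholders, since the actual twelve rows are elaborated in the book.

```python
from enum import IntEnum
from statistics import mean

class Maturity(IntEnum):
    UNAWARE = 0     # not thinking ecosystem
    INITIATING = 1  # beginning to establish an ecosystem approach
    MATURE = 2      # a working approach
    LEADING = 3     # setting the pace, thinking ahead

# Six elements, each split into two components. The component labels
# are placeholders, not the book's actual twelve rows.
ELEMENTS = {
    "culture": ("component_1", "component_2"),
    "formal_learning": ("component_1", "component_2"),
    "performance_focus": ("component_1", "component_2"),
    "social": ("component_1", "component_2"),
    "measurement": ("component_1", "component_2"),
    "infrastructure": ("component_1", "component_2"),
}

def element_profile(ratings):
    """Average the two component ratings into a per-element score."""
    return {element: mean(int(r) for r in components.values())
            for element, components in ratings.items()}

# Example self-assessment (ratings invented for illustration):
ratings = {e: {c: Maturity.INITIATING for c in comps}
           for e, comps in ELEMENTS.items()}
print(element_profile(ratings))  # all 1.0 -> just initiating everywhere
```

The point of the structure is the same as the prose: you score yourself against fixed levels, not against whatever the company down the road happens to be doing.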

And that was my response to the query. As a person, on principle, you're not supposed to compare yourself to others, but to your own progress. How do you set your benchmarks? Against formal criteria. The same is true for organizations. I've tried to make a scrutable framework, the Revolution Field Guide, so to speak. So, please, look to best principles, not practices, and evaluate yourself similarly.

 

Chasing Technology Good and Bad

19 March 2019 by Clark

I’ve been complaining, as part of the myths tour, that everyone wants the magic bullet. But, as I was commenting to someone, there are huge tech opportunities we’re missing. How can I have it both ways?  Well, I’m talking about two different techs (or, rather, many).  The fact is, we’re chasing the wrong technologies.

The problem with the technologies we're chasing is that we're chasing them from the wrong beginning. I see people chasing microlearning, adaptive learning, video, sims, and more as the answer. And of course that's wrong. There can't be one all-singing, all-dancing solution, because the nature of learning is remarkably diverse. Sometimes we need reminders, sometimes deep practice; sometimes individualization makes sense, and other times it's not ideal.

The part that's really wrong here is that they're doing this on top of bad design! And, as I believe I've mentioned, gilded bad design is still bad design. Moreover, if people first spent the time and money on improving their learning design, they'd get a far better return on investment than from chasing the latest shiny object. AND, later investments in most anything would be better poised to actually be worthwhile.

That would seem to suggest that there's no sensible tech to chase (after, of course, authoring tools for creating elearning). And that's not true. Investment in, say, sims makes sense if you're using them to implement good design (e.g. deep practice), as part of a good learning design strategy. But there's something deeper I'm talking about. And I've talked about it before.

What I’m talking about are content systems. They may seem far down the pike, but let me (again) make the case about why they make sense now, and for the future. The thing is, being systematic about content has both short-term  and  long-term benefits. And you can use the short-term ones to justify the long-term ones (or vice-versa).

In the short term, thinking about content from a systems perspective offers you rigor. While that may seem off-putting, it's actually a benefit. If you design your content model around good learning design, you're taking the first step, above, toward good design. And, if you write good descriptions for those elements, you really provide a foundation that makes it difficult to do bad design.
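As one illustrative sketch of that rigor (the element types and fields here are my assumptions, not a prescribed schema), a content model with written descriptions might look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class ContentElement:
    """One typed chunk in a content model."""
    kind: str          # e.g. "concept", "example", "practice" (assumed types)
    objective: str     # the learning objective this element serves
    description: str   # what a good instance of this element must contain

@dataclass
class LearningContent:
    topic: str
    elements: list = field(default_factory=list)

    def missing_descriptions(self):
        """The rigor: flag elements whose design intent isn't written down."""
        return [e.kind for e in self.elements if not e.description.strip()]

course = LearningContent("objection handling", [
    ContentElement("concept", "apply the model",
                   "states the model and when it applies"),
    ContentElement("practice", "apply the model", ""),  # not yet specified
])
print(course.missing_descriptions())  # -> ['practice']
```

Once every element has to declare its objective and description, slapping bullet points on a screen stops being an option.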

My point is that we're ignoring meaningful moves to chase chimeras. There are steps of real value to take, including formalizing design processes and tools around good design. And there are ways to throw your money away on the latest fad. It's your choice, but I hope I've made a case for one interpretation. So, what's yours?

Curriculum or pedagogy?

12 March 2019 by Clark

In a conversation today, I mentioned that I've previously thought that perhaps the best next 'man on the moon' project would be to put an entire K12 curriculum up online. And I've also thought that the only way to really fix things is to train the trainers of teachers to facilitate learning around meaningful activity. And, of course, both are needed. What am I thinking?

So, there are huge gaps in the ways in which folks have access to learning. For example, I worked on a project that was trying to develop some K12 curricula online, to provide support for learners in high schools that might not have sufficiently capable teachers. The project had started with advanced learners, but recognized that wasn't the only gap. And this is in California! So I have argued for a massive project, but using advanced curricula and pedagogy.

And, at the other end, I spoke at a conference looking at improving education in India. There, they have a much bigger need for good teachers than their education schools can meet. I was arguing for a viral teacher prep: the idea being not just to train teachers, but to train the trainers of those teachers. Then the training could go viral, as just teaching teachers wouldn't go fast enough.

And both are right, and not enough. In the conversation, I resurrected both points and am now reflecting on how they interact. The simple fact is that we need a better curriculum and a better pedagogy. As Roger Schank rightly points out, things like the quadratic equation are nuts to keep in a K12 curriculum. The fact is that our curriculum came from before the Industrial Age and is barely adequate even there. Yet we're in an Information Age. And our pedagogy is aligned to tests, not to learning or doing. We should be equipping kids with actionable knowledge to make meaningful decisions in their lives, not with arbitrary and abstract knowledge that isn't likely to transfer.

And, of course, even if we did have such a curriculum online, we'd need teachers who could facilitate learning in this way. And that's a barrier not just in India. The point being that most of the world is suffering with bad curricula and pedagogy. How do we make this change?

And I don’t have an answer. I think we should put both online, and support on the ground. We need that content, available through mobile to reach beyond the developed world, and we need the facilitators. They can be online, as I think about it, but they need to understand the context on the ground if they’re not there. They are context-specific necessities. And this is a massive problem.

Principle says: start small and scale. There are institutions doing at least parts of this, but scaling is a barrier. And again, I have no immediate solution other than a national (or international) initiative. We don’t want just one without the other. I don’t want teachers facilitating the old failed curricula, and I don’t want current pedagogies working on the new curricula. (And I shudder at the thought of a pre-college test in the old style trying to assess this new model!) I welcome your thoughts!

Thoughts on strategy from Training 19

6 March 2019 by Clark

So last week I was the strategy track coach for the Training 19 conference. An experiment! That meant that I picked the sessions from a list of those who put their session proposals up for ‘strategy’, and could choose to open and/or close the track. I chose both. And there were thoughts on strategy from the sessions and the attendees that are worth sharing.

I chose the sessions mainly on two criteria: coverage of the topics, and sessions that sounded like they’d give real value.  I was lucky, the latter happened! While I didn’t get the complete coverage I wanted, I  did get a good spread of topics. So I think the track worked. As to the coaching, there wasn’t much of that, but I’ve sent in suggestions for whoever does it next year.

I knew two of the presenters, and some were new. My goal, again, was real coverage. And they lived up to it. Friend and colleague Michael Allen practiced what he preached while talking about good learning design, as he does. He was followed by Karen Polhemus & Stephanie Gosteli, who told a compelling tale of how they were managing a huge initiative by combining training with change management. Next, JD Dillon, another friend, talked about his experiences building learning ecosystems that deemphasized courses, based upon data and his inferences. Alwyn Klein made an enthusiastic and compelling case for doing performance consulting before you start. Haley Harris & Beth Wisch went deep on data in talking about how they met content needs by curating. Joe Totherow talked about games as a powerful learning tool. Finally, Alex Kinnebrew pushed for finding stakeholder voices as a complement to data in making strategy.

[Image: Performance Ecosystem]

I bookended these talks. I opened by making the case for doing optimal execution right, meaning doing proper learning design and performance support. Then I talked about driving continual innovation with social and informal learning. I closed by laying out the performance ecosystem diagram (ok, so I replaced 'elearning' in the diagram with 'training', and that's probably something I'll keep), and placed the coming talks on it, so that attendees would know where the talks fit. I mostly got it right ;). However, the feedback suggested that those who complained did so because I took too long to get to the overview. Useful feedback.

I finished with a 3 hour strategy session where I walked people through each element of the ecosystem (as I cut it), giving them examples, providing self-assessment, and items to add to their strategy for that element. I closed by suggesting that it was up to them to sequence, based upon their particular context. Apparently, people  really liked this opportunity. One challenge was the short amount of time; this is usually run as a full day workshop.

It’s clear that folks are moving to thinking ‘outside of the box’, and I’m thrilled. There were good audiences for the talks in a conference focused on doing training! It’s definitely time for thoughts on strategy. Perhaps, as has happened before, I was ahead of the time for the revolution. Here’s to a growing trend!

Fish & Chips Economics

13 February 2019 by Clark

A colleague, after hearing my take on economics, suggested I should tell this story. It’s a bit light-hearted, but it does make a point. And I’ve expanded it here for the purposes of reading versus listening. You can use other services or products, but I’ve used fish and chips because it’s quite viscerally obvious.

Good fish and chips are a delight.  When done well, they’re crispy, light, and not soggy. Texturally, the crunch of the batter complements the flakiness of the fish as the crunchier exterior of the chips (fries, for us Yanks) complements the softness of the potato inside. Flavorwise it’s similarly a win, the battered fish a culinary combination of a lightly savory batter against the simple perfection of the fish, and the chips provide a smooth complement. Even colorwise, the light gold of the chips set against the richer gold of the fish makes an appealing platter.  It’s a favorite from England to the Antipodes.

And we know how to do it. We know that the proper temperature, a balanced batter, and the right-sized fries are key to the perfection. There is variation, in the thickness of the fries or the components of the batter, but we know the parameters. We can do this reliably and repeatably.

So why, of all things, do we still have shops that sell greasy, sodden fish and chips? You know they’re out there. Certainly consumers should avoid such places and only patronize purveyors who are able to replicate a recipe that’s widely known.  Yet, it is unfortunately all too easy to wander from town to town, from suburb to suburb, and find a surprising variation.  This just doesn’t make sense!

And that’s an important “doesn’t make sense”.  Because, economics tells us that competition will drive a continuing increase in the quality of products and services. Consumers will seek out the optimal product, and those who can’t compete will fall away. Yet these variations have existed for decades!  “Ladies & gentlemen, we have a conundrum!”

The result? The fundamental foundation of our economy is broken.  (And, of course, I’m using a wee bit of exaggeration  for humor.) However, I’m also making a point: we need to be careful about the base statements we hear.

The fact of the matter is that consumers  aren’t optimizing, they’re ‘satisficing’. That is, consumers will choose ‘satisfactory’ solutions rather than optimal. It’s a tradeoff: go a mile or two further for good fish and chips, or just go around the corner for the less desirable version. Hey, we’re tired at the end of a long day, or the kids are on a rampage, or…  This, in the organizational sense, was the basis of Herb Simon’s Nobel Prize in Economics, before he went on to be a leader of the cognitive science revolution.

The underlying point, besides making an affectionate dig at our economic model, is that the details matter. The joke is that economics predictions have no real basis in science, and yet important assumptions get made regardless. This isn't a political rant, in any case; it's more a point about the fundamentals of society, and how we evaluate them. As requested.

What's the CEO want to see?

6 February 2019 by Clark

This issue came up recently, and it’s worth a think. What would a CEO  hope to see from L&D?  And I think this is in two major areas. The first is for optimum execution, and the other is for continual innovation. It’s easier to talk about the former, so we’ll start there. However,  if (or, rather, when ;) L&D starts executing on the other half, we should be looking for tangible outcomes there too.

Optimal Execution

To start with, we need specific responses for the things an organization knows it needs to do (at least until that can be automated). Orgs must do what matters, and address any gaps. Should our glorious leader care about us doing what we're supposed to? No. Instead, this individual is concerned that the gaps that have emerged get fixed. Of course, we have to admit the problems we're having as well.

The CEO shouldn’t have to care how efficient we are. That’s a given!  Sure, when requested, we must be able to demonstrate that our costs were covered by the benefits of the change. But the fact that we’re no more expensive than anyone else per seat per hour is just assumed!  If we’re asked, we should be able to show that, and it can be in a written report. However, mentioning efficiency in the C-suite is a ticket out.

What a CEO (should) care about are any performance gaps that have arisen in previous meetings and the changes that L&D has achieved. You know, "we've been able to decrease those troubling call handling times back down to x.5 minutes" or "we identified the problem and were able to reduce the errors in manufacturing back to our y/100 baseline." These may even include "saving us $z."

To do this, of course, means you’re actually addressing key business impact drivers. You need to be talking to the business units, using their measures and performance consulting to find and fix the problems. It’s not “ok, we’ll get you a course on this”, it’s “sure, we can do that course, and tell me what the outcome should be, how will we know it worked?”

Yes, particularly at the beginning when you're establishing credibility, you may be asked for ROI: how much did it cost to fix this? You do want the fix to cost less than the problem. But that won't be the main criterion. The CEO should be focusing on strategy, and on fixing the problems that prevent executing in those directions.
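(For a concrete, invented illustration: if a performance gap is costing the org $200K a year and the intervention to close it costs $50K, the fix costs a quarter of the problem, and the first-year return is (200K − 50K) / 50K = 300%.)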

Continual Innovation

That strategy, of course, comes from new ideas. And, to be fair, so too do the fixes to problems. That's the learning that occurs outside the course! Research, innovation, design, trouble-shooting: all of these start with a question, and ultimately an answer will be learned. It comes from experimentation and reflection, as well as looking at what others have done (inside and out).

What are the measures here? Well, if we take the result that innovation comes from collective constructive friction rather than the individual brainstorm, then meaningful social media activity would be one indicator. Increasing either the quantity or quality of discussions would be one. Just 'activity' in the social systems has been one initial measure. But we can go further.

We should expect these activities to impact particular outcomes. If it's in sales, we should see more proposals generated, higher success rates, shorter closing times, lower closing costs, and other such metrics. In operations, we might see fewer errors, more experiments, more new product ideas generated. And so on. E.g. "we increased the percentage of…" The point is that if people are sharing lessons learned, we should see faster learning and higher success rates and/or greater innovations.

Of course, we have to count these. Whatever method, whether xAPI or proprietary, we should be tracking activity and correlating with business metrics. With a little thought, we can be looking for and leveraging interesting relationships between what people do in learning (and performing) and what the outcomes are.
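As a minimal sketch of what 'correlating with business metrics' could look like (the data and metric names are invented for illustration; a real analysis would need more care than a single correlation):

```python
import numpy as np

# Invented weekly counts: social-platform activity (e.g. pulled from an
# LRS via xAPI statements) alongside a business metric for the same weeks.
activity  = np.array([12, 18, 25, 31, 40, 38, 45, 52])  # posts/shares
proposals = np.array([ 3,  4,  4,  6,  7,  6,  8,  9])  # proposals won

# Pearson correlation as a first, crude signal that the two move
# together. (Correlation, not causation -- a starting point only.)
r = np.corrcoef(activity, proposals)[0, 1]
print(f"activity vs. outcome: r = {r:.2f}")
```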

We could also be reporting out on the outputs of sessions that L&D facilitates. At least initially; after that, the overall increase in innovation metrics would be appropriate. The key role of L&D in innovation is developing capabilities around best principles, and that includes facilitating and developing facilitative skills.

Impact

The take-home is that the CEO shouldn't want to hear our internal metrics on effectiveness and efficiency. Don't expect that person to know about learning theory, best approaches, or L&D benchmarks. They want, and need, organizational impact. Even the number or percentage of employees who've taken L&D services isn't enough. What has that done? Report impact on the organization. Let the CEO know how much you've helped the key metrics, which are directly tied to the bottom line. Yes, you have to start working with business partners. Yes, it requires breaking some molds. But ultimately, L&D will live, or die, by whether it's accountable to, and contributing to, organizational success in demonstrable ways.

Learning from Experimentation

5 February 2019 by Clark

At the recent LearnTec conference, I was on a panel with my ITA colleagues, Jane Hart, Harold Jarche, and Charles Jennings. We were talking about how to lift the game of Modern Workplace Learning, and each had staked out a position, from human performance consulting to social/informal. Mine (of course :) was at the far end, innovation.  Jane talked about how you had to walk the walk: working out loud, personal learning, coaching, etc.  It triggered a thought for me about innovating, and that meant experimentation. And it also occurred to me that it led to learning as well, and drove you to find new content. Of course I diagrammed the relationship in a quick sketch. I’ve re-rendered it here to talk about how learning from experimentation is also a critical component of workplace learning.

[Image: Increasing experimentation and even more learnings based upon content]

The starting point is experimentation. I put in 'now', because that's of course when you start. Experimentation means deciding to try new things, but not just any things. They should be things that have a likelihood of improving outcomes if they work. The goal is 'smart' experiments: ones that are appropriate for the audience, build upon existing work, and are buttressed by principle. They may or may not be things that have worked elsewhere, but if so, they should have had good outcomes (or, less likely, didn't, but have an environmentally-sound reason to work for you).

Failure  has to be ok.  Some experiments should not work. In fact, a failure rate above zero is important, perhaps as much as 60%!  If you can’t fail, you’re not really experimenting, and the psychological safety isn’t there along with the accountability.  You learn from failures as well as from successes, so it’s important to expect them. In fact, celebrate the lesson learned, regardless of success!

The reflections from this experimentation take some thought as well. You should have designed the experiments to answer a question, and the experimental design should have been appropriate (an A-B study, or comparing to baseline, or…). Thus, the lesson from the experimentation is quickly discerned. You also need to have time to extract the lesson! The learnings here move the organization forward. Experimentation is the bedrock of a learning organization, if you consolidate the learnings. One of the key elements of Jane's point, and others', was that you need to develop this practice of experimentation for your team. Then, when understood and underway, you can start expanding: first with willing (ideally, eager) partners, and then more broadly.
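For illustration, here's a minimal sketch of analyzing one such experiment against a baseline (the data are invented, and a real design would also consider sample size, confounds, and the like):

```python
from scipy import stats

# Invented outcome data (e.g. task time in minutes): a baseline group
# versus the experimental condition.
baseline   = [12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 12.0]
experiment = [10.8, 11.1, 10.2, 11.5, 10.9, 10.4, 11.0, 10.6]

# A two-sample t-test answers the question the experiment was designed
# to ask: is the difference bigger than chance would explain?
t, p = stats.ttest_ind(experiment, baseline)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Either way the result comes out, there's a lesson to extract, which is the whole point.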

Not wanting to minimize, nor overly emphasize, the role of 'content', I put it in as well. The point is that in doing the experimentation, you're likely to be driven to do some research. It could be papers, articles, blog posts, videos, podcasts, webinars, what have you, depending on your circumstances and interests, and… who knows, maybe even courses! It includes social interactions as well. The point is that it's part of the learning.

What’s  not in the diagram, but is important, is sharing the learnings. First, of course, is sharing within the organization. You may have a community of practice or a mailing list that is appropriate.  That builds the culture. After that, there’s beyond the org.  If they’re proprietary, naturally you can’t. However, consider sharing an anonymized version in a local chapter meeting and/or if it’s significant enough or you get good enough feedback, go out to the field. Present at a conference, for instance!

Experimentation is critical to innovation. And innovation takes a learning organization. This includes a culture where mistakes are expected, there's time for reflection, practices for experimentation are developed, and more. Yet the benefit, an agile organization, is essential. Experimentation needs to be part of your toolkit. So get to it!

 

Skating to where L&D needs to be

30 January 2019 by Clark

“I skate to where the puck is going to be, not where it has been.” – Wayne Gretzky

This quote, over-used to the point of being a cliché, is still relevant. I was just reading Simon Terry’s amusing and insightful  post on ‘best practices’ (against them, of course), and it reminded me of this phrase. He said “Best practices are often racing to where someone used to be”, and that’s critical. And I’ve argued against best practices, and I want to go further.

So he’s right that when we’re invoking best practices, we’re taking what someone’s already done, and trying to emulate it. He argues that they’ve already probably iterated in making it work,  in their org. Also, that by the time you do, they’ve moved on. They may even have abandoned it!  Which isn’t, directly, my complaint.

My argument against best practices is that they worked for them, but their situation's different from yours. The practice may be antithetical to your culture. And thinking that you can just graft it on is broken. Which is kind of Simon's point too. And he's right that if you do get it working, you'll find that the time it has taken means it's already out of date.

So my suggestion has been to look to best principles:  why  did it work?  Abstract out the underlying principle, and figure out how (or even whether) to instantiate that in your own organization.  You’d want to identify a gap in your way of working, search through possible principles, identify one that matches, and work to implement it.  That makes more sense.  And, of course, it should be a fix that even if it takes time, will be meaningful.

But now I want to go further. I argue for comprehending the affordances of new technology to leapfrog the stage of replicating what was done in the old. Here I'm making a similar sort of argument. What I want orgs to do is to define an optimal situation, and then work to that! Yes, I know it sounds like a fairytale, but I think it's a defensible approach. Of course, your path there will differ from another's (there's no free lunch :), but if you can identify what a good state for your org would be, you can move to it. It involves incorporating many relevant principles in a coherent whole. Then you can strategize the path there from your current context.

The point is to figure out what the right future is, and skate there, not back-filling the problems you currently have. Beyond individual principles to a coherent whole. Proactive instead of reactive. That seems to make sense to me. Of course, I realize the other old cliché, "when you're up to your ass in alligators…", but maybe it's time to change the game a bit more fundamentally. Maybe you shouldn't be in the swamp anyway? I welcome your thoughts!

 

What to evaluate?

22 January 2019 by Clark

In a couple of articles, the notion that we should be measuring our impact on the business is called out. And being one who says just that, I feel obligated to respond.  So let’s get clear on what I’m saying and why.  It’s about what to evaluate, why, and possibly when.

So, in the original article, by my colleague Will Thalheimer, he calls the claim that we should focus on business impact ‘dangerous’!  To be fair (I know Will, and we had a comment exchange), he’s saying that there are important metrics we should be paying attention to about what we do and how we do it. And no argument!  Of course we have to be professional in what we do.  The claim isn’t that the business measure is  all we need to pay attention to. And he acknowledges that later. Further, he does say we need to avoid what he calls ‘vanity metrics’, just how efficient we are. And I think we  do need to look at efficiency, but only after we know we’re doing something worthwhile.

The second article is a bit more off-kilter. It seems to ignore the value of business metrics altogether. It talks about competencies and audience, but not impacting the business. Again, the author raises the importance of being professional, but still seems to be in the 'if we do good design, it is good' camp, without seeming to even check whether the design is addressing something real.

Why does this matter?  Partly because, empirically, what the profession measures are what Will called ‘vanity’ measures. I put it another way: they’re efficiency metrics. How much per seat per hour? How many people are served per L&D employee?  And what do we compare these to?  Industry benchmarks. And I’m not saying these aren’t important, ultimately. Yes, we should be frugal with our resources. We even should ultimately ensure that the cost to improve isn’t more than the problem costs!  But…

The big problem is that we’ve no idea if that butt in that seat for that hour is doing any good for the org.  We don’t know if the competency is a gap that means the org isn’t succeeding!  I’m saying we need to focus on the business imperatives because we  aren’t!

And then, yes, let’s focus on whether our learning interventions are good. Do we have the best practice, the least amount of content and it’s good, etc. Then we can ask if we’re efficient. But if we only measure efficiency, we end up taking PDFs and PPTs and throwing them up on the screen. If we’re lucky, with a quiz. And this is  not going to have an impact.

So I’m advocating the focus on business metrics because that’s part of a performance consulting process to create meaningful impacts. Not in lieu of the stuff Will and the other author are advocating, but in addition. It’s all too easy to worry about good design, and miss that there’s no meaningful impact.

Our business partners will not be impressed if we’re designing efficient, and even effective learning, if it isn’t doing  anything.  Our solutions need to be  targeted at a real problem and address it. That’s why I’ll continue to say things like “As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.”  Then we also need to be professional. Will’s right that we don’t do enough to assure our effectiveness, and only focus on efficiency. But it takes it all, impact + effectiveness + efficiency, and I think it’s dangerous to say otherwise.  So what say you?

Redesigning Learning Design

16 January 2019 by Clark

Of late, a lot of my work has been redesigning learning design: helping orgs transition their existing design processes to ones that will actually have an impact. That is, someone's got a learning design process, but they want to improve it. One idea, of course, is to replace it with some validated design process. Another approach, much less disruptive, is to find opportunities to fine-tune the design. The idea is to find the minimal set of changes that will yield the maximal benefit. So what are the likely inflection points? Where am I finding those spots for redesigning? It's about good learning.

Starting at the top, one place where organizations go wrong right off the bat is the initial analysis for a course. There’s the ‘give us a course on this’, but even if there’s a decent analysis the process can go awry. Side-stepping the big issue of performance consulting (do a reality check: is this truly a case for a course), we get into working to create the objectives. It’s about how you work with SMEs. Understanding what they can,  and can’t, do well means you have the opportunity to ensure that you get the right objectives to design to.

From there, the most meaningful and valuable step is to focus on the practice. What are you having learners  do, and how can you change that?  Helping your designers switch to good  assessment writing is going to be useful. It’s nuanced, so the questions don’t  seem that different from typical ones, but they’re much more focused for success.

Of course, to support good application of the content to develop abilities, you need the right content! Again, getting designers to understand the nuances that distinguish useful examples from mere stories isn't hard, but it's rarely done. Similarly, knowing why you want models, and not just presentations about the concept, isn't fully realized.

Of course, making it an emotionally compelling experience has learning impact as well. Yet too often we see the elements just juxtaposed instead of integrated. There  are systematic ways to align the engagement and the learning, but they’re not understood.

A final note is knowing when to have someone work alone, and when some collaboration will help. It's not a lot, but collaboration at the right time (or at all) can make a valuable contribution to the quality of the outcome.

I’ve provided many resources about better learning design, from my 7 step program white paper  to  my deeper elearning series for Learnnovators.  And I’ve a white paper about redesigning as well. And, of course, if you’re interested in doing this organizationally, I’d welcome hearing from you!

One other resource will be my upcoming workshop at the Learning Solutions conference on March 25 in Orlando, where we’ll spend a day working on learning experience design, integrating engagement and learning science.  Of course, you’ll be responsible for taking the learnings back to your learning process, but you’ll have the ammunition for redesigning.  I’d welcome seeing you there!
