Learnlets

Clark Quinn’s Learnings about Learning

xAPI conceptualized

1 March 2016 by Clark 6 Comments

A couple of weeks ago, I had the pleasure of attending the xAPI Base Camp to present on content strategy. While I was there, I remembered that I have some colleagues who don't see the connection between xAPI and learning. And it occurred to me that I hadn't seen a good diagram that explained how this all worked. So I asked, and my suspicion was confirmed. And, of course, I had to take a stab at it.

What I was trying to capture was how xAPI tracks activity, and how that can then be used for insight. I think one of the problems is that people think xAPI is a solution all in itself, but it is just a syntax for reporting.

So when person A demonstrates a capability at a particular level, say at the end of a learning experience, or by affirmation from a coach or mentor, that gets recorded in a Learning Record Store. We can see that A and B demonstrated it, and that C demonstrated a different level of capability (it could also be that there's no record for C, or D, or…).
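The 'syntax for reporting' is the xAPI statement itself: an actor-verb-object triple, optionally carrying a result and context. A minimal sketch of the statement that would record learner A's demonstration (the learner name, mailbox, and activity ID here are invented for illustration; the verb is one of the registered ADL verbs):

```python
import json

# A minimal xAPI statement: actor-verb-object, plus a result
# recording the demonstrated capability level. Names and activity
# IDs are illustrative, not canonical.
statement = {
    "actor": {
        "mbox": "mailto:learner.a@example.com",  # hypothetical learner "A"
        "name": "Learner A",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/mastered",
        "display": {"en-US": "mastered"},
    },
    "object": {
        "id": "http://example.com/activities/customer-handling",
        "definition": {"name": {"en-US": "Customer handling"}},
    },
    "result": {
        "score": {"scaled": 0.9},  # capability level, 0..1
        "success": True,
    },
}

# This JSON payload is what gets POSTed to the Learning Record Store.
payload = json.dumps(statement)
```

Any tool that can emit this payload, not just a course, can report into the LRS, which is the whole point.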

From there, we can compare that activity with results. Our business intelligence system can provide aggregated performance data for A (whatever A is being measured on: sales data, errors, time to solve customer problems, customer satisfaction, etc.). With that, we can see whether the correlations we expect are there, e.g. everyone who demonstrated this level of capability reliably performs better than those who didn't. Or whatever you're expecting.

Of course, you can mine the data too, seeing what emerges.  But the point is that there are a wide variety of things we might track (who touched this job aid, who liked this article, etc), and a wide variety of impacts we might hope for.  I reckon that you should plan what impacts you expect from your intervention, put in checks to see, and then see if you get what you intended.  But we can look at a lot more interventions than just courses. We can look to see if those more active in the community perform better, or any other question tied to a much richer picture than we get other ways.

Ok, so you can do this with your own data-generating mechanisms, but standardization has benefits (how about agreeing that red means stop?). So, first, does this align with your understanding, or did I miss something? And, second, does this help at all?

When to gamify?

24 February 2016 by Clark Leave a Comment

I've had a comment lurking in my 'to do' list about doing a post on when to gamify. In general, of course, I avoid gamification, but I have to acknowledge there are times when it makes sense. And someone challenged me to think about what those circumstances are. So here I'm taking a principled shot at it, but I also welcome your thoughts.

To be clear, let me first define what gamification is to me. I'm a big fan of serious games, that is, wrapping meaningful decisions into contexts that are intrinsically meaningful. And I can be convinced that there are times when tarting up memory practice with quiz-show window-dressing makes sense, e.g. when the knowledge has to be 'in the head'. What I typically refer to as gamification, however, is using external resources, such as scores, leaderboards, badges, and rewards, to support behavior you want to happen.

I happened to hear a gamification expert talk, and he pointed out some rules about what he termed ‘goal science’.  He had five pillars:

  1. that clear goals make people feel connected and align the organization
  2. that working on goals together (in a competitive sense ;) makes them feel supported
  3. that feedback helps people progress in systematic ways
  4. that the tight loop of feedback is more personalized
  5. that choosing challenging goals engages people

Implicit in this is that you do  good goal setting and rewards. You have to have some good alignment to get these points across.  He made the point that doing it badly could be worse than not doing it at all!

With these ground rules, we can think about when it might make sense. I'll argue that one obvious, and probably sad, case would be when you don't have a coherent organization, and people aren't aware of their role in it. Making up for a lack of effective communication isn't necessarily a good thing, in my mind.

I think it also might make sense as a fun diversion to achieve a short-term goal. This might be particularly useful for an organizational change, when extra motivation could help support new behaviors. (Say, for moving to a coherent organization. ;) Or some periodic event, supporting, say, a philanthropic commitment related to the organization.

And it can be a reward for a desired behavior, such as my frequent flier points. I collect them, hoping to spend them. I resent it a bit, because it's never as good as promised, which is a worry: it means it's not being done well.

On the other hand, I can’t see using it on an ongoing basis, as it seems it would undermine the intrinsic motivation of doing meaningful work.  Making up for a lack of meaningful work would be a bad thing, too.

So, I recall talking to a guy many moons ago who was an expert in motivation for the workplace. And I had the opportunity to see the staggering amount of stuff available to orgs to reward behavior (largely sales) at an exhibit happening next to our event. It’s clear I’m not an expert, but while I’ll stick to my guns about preferring intrinsic motivation, I’m quite willing to believe that there are times it works, including on me.

Ok, those are my thoughts, what’ve I missed?

The magic question

23 February 2016 by Clark Leave a Comment

A number of years ago, I wrote a paper about design, some of the barriers our cognitive architecture provides, and some heuristics I used to get around them.  I wrote a summary of the paper as four posts, starting here.  I was reminded of one of the heuristics in a conversation, and had a slightly deeper realization that of course I wanted to share.

The approach, which I then called 'no-limits' design, has to do with looking at what solution you'd develop if you had no limits. I now think of it as the 'magic' approach. As I mentioned in the post series, this approach asks what you'd design if you had magic (and referred to the famous Arthur C. Clarke quote). And while I indicated one benefit in the past, I now think there are two benefits to this approach.

First, if you consider what you'd do if you had magic, you can help prevent a common problem: premature convergence. Our cognitive architecture has weaknesses, and a couple of them revolve around solving problems in known ways and using tools in familiar ways. It's too easy to subconsciously rule out new options. By asking the 'magic' question, we ask ourselves to step outside what we know and believe is possible, and consider the options we'd have without the technological limitations.

Similarly,  using the notion of ‘magic’ can help us explore other models for accomplishing the goal. If design is not just evolutionary, but you also want to explore the opportunities to revolutionize, you need  some way to spark new thinking.  The ability to remove the limitations and explore the core goals facilitates that.

Using this at the wrong time, however, could be problematic; you may have already constrained your thinking too far. If your design process starts with a clear identification of the problem (including the type of design-thinking analysis that uses ethnographic approaches) before looking for solutions, and then considers a wide variety of input about solutions, including approaches already tried, you'd want the 'magic' question to come after the problem identification but before exploring any other solutions.

Pragmatically, per my previous post, you want to think about your design processes from a point of view of leverage. Having worked through several efforts to improve design with partners and clients, I've found clear leverage points that give you the maximum impact on the quality of the learning outcome (e.g. how 'serious' your solution is) for minimal effort. There are many more small steps that can be integrated to improve your outcomes, so it helps to look at the process and consider improvement opportunities. So, are you ready to ask the 'magic' question?

Litmos Guest Blog Series

16 February 2016 by Clark Leave a Comment

As I did with Learnnovators, I've also done a series of posts with Litmos, in this case a year's worth. Unlike the other series, which was focused on deeper eLearning design, they're not linked thematically; instead they cover a wide range of topics that we mutually agreed were personally interesting and of interest to their audience.

So, we have posts on:

  1. Blending learning
  2. Performance Support
  3. mLearning: Part 1 and Part 2
  4. Advanced Instructional Design
  5. Games and Gamification
  6. Courses in the  Ecosystem
  7. L&D  and  the Bigger Picture
  8. Measurement
  9. Reviewing Design Processes
  10. New Learning Technologies
  11. Collaboration
  12. Meta-Learning

If any of these topics are of interest, I welcome you to check them out.

 

Badass

10 February 2016 by Clark 1 Comment

That's the actual title of a book, not me being a bit irreverent. I've been a fan of Kathy Sierra's since I came across her work; e.g., I regularly refer to how she expresses 'incrementalism'. She's on top of usability and learning in very important ways. And she's got a new book out that I was pleased to read: Badass: Making Users Awesome. So why do I like it? Because it elegantly intermixes learning and usability to talk about how to do design right (which I care about; I used to teach interface design besides my focus on learning design), but more importantly because the lessons invoked also apply to learning.

So what's she doing differently? She's taking product design beyond marketing and beyond customer desires. The premise of the book is that it's not about the user and not about the product; it's about the two together making the user more capable in ways they care about. Your audience should be saying "Look at what I can do" because of the product, not "I love this product". This, she argues cogently, is valuable; it trumps mere branding, instead building customer loyalty as an intrinsic outcome of the experience they have.

The argument starts with making the case that it’s about what user goals are, and then figuring out how to get there in ways that systematically develop users’  capability while managing their expectations. Along the way, she talks about being clear on what will occur, and giving them small wins along the way.  And she nicely lays out learning science and motivation research as practical implications.

While she's more focused on developing complex products with interfaces that remove barriers like cognitive load and provide incremental capability, this applies to learning as well. We want to get learners to new capabilities in steps that maintain motivation and prevent drop-off. She gets into issues like intermediate skills and how to develop them in ways that optimize outcomes, which is directly relevant to learning design. She cites a wide variety of people in her acknowledgements, including Julie Dirksen and Jane Bozarth in our space, so you know she's tracking the right folks.

It's an easy read, too. It's unusual: paperback, but on weighty paper supporting the colorful graphics that illustrate her every point. The balance of prose and images is at least equal, if not tilted toward the images. While not focused specifically on learning design, it covers a lot of that, but also performance support and more, in an integrated format that resonates with an overall perspective on a performance ecosystem.

While perhaps not as fundamental as Don Norman's Design of Everyday Things (which she references, and which everyone who designs for anyone else needs to read), it's a valuable addition for those who want to help people achieve their goals, and that includes product designers, interface designers, and learning experience designers. If you're designing a solution for others, whether a mobile app, an authoring tool, an LMS, or other, you do need this. If you're designing learning, you probably need this. And if you're designing learning as a business (e.g. designing learning for commercial consumption), I highly recommend giving this a read.

Reactivating Learning

27 January 2016 by Clark Leave a Comment

(I looked  because I’m sure I’ve talked about this before, but apparently not a full post, so here we go.)

If we want our learning to stick, it needs to be spaced out over time. But what sorts of things will accomplish this?  I like to think of three types, all different forms of reactivating learning.

Reactivating learning is important. At a neural level, we’re generating  patterns of activation in conjunction, which strengthens the relationships between these patterns, increasing the likelihood that they’ll get activated when relevant. That’s why context helps as well as concept (e.g. don’t just provide abstract knowledge).  And I’ll suggest there are 3 major categories of reactivation to consider:

Reconceptualization: here we're talking about presenting a different conceptual model that explains the same phenomena. Particularly if the learners have had some meaningful activity, from your initial learning or through their work, showing a different way of thinking about the problem is helpful. I like to link it to Rand Spiro's Cognitive Flexibility Theory, and explain that having more ways to represent the underlying model provides more ways to understand the concept to begin with, a greater likelihood that one of the representations will get activated when there's a problem to be solved, and a greater likelihood that it will activate the other model(s), so one of them can lead to a solution. So, you might think of electrical circuits like water flowing in pipes, or think about electron flow, and either could be useful. It can be as simple as a new diagram, animation, or just a small prose recitation.

Recontextualization: here we're showing another example. We're showing how the concept plays out in a new context, which gives a greater base from which to abstract and comprehend the underlying principle, and provides a new reference that might match a situation they could actually see. To process it, you're reactivating the concept representation, comprehending the context, and observing how the concept was used to generate a solution to this situation. A good example, with a challenging situation that the learner recognizes, a clear goal, and cognitive annotation showing the underlying thinking, will serve to strengthen the learning. A graphic novel format would be fun, or a story, or video; anything that captures the story, thinking, and outcome would work.

Reapplication: this is the best, where instead of consuming a concept model or an example, we actually provide a new practice problem. This should require retrieving the underlying concept, comprehending the context, and determining how the model predicts what will happen to particular perturbations and figuring out which will lead to the desired outcomes.  Practice makes perfect, as they say, and so this should ideally be the emphasis in reactivation.  It might be as simple as a multiple-choice question, though a scenario in many instances would be better, and a sim/game would of course be outstanding.

All of these serve as reactivation. Reactivation, as I’ve pointed out, is a necessary part of learning.  When you don’t have enough chance to practice in the workplace, but it’s important that you have the ability when you need it (and try to avoid putting it in the head if you can), reactivation is a critical tool in your arsenal.
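Since the point is spacing reactivation over time, the scheduling side can be made concrete. A minimal sketch of an expanding-interval schedule; the gaps used here (2, 6, 18 days) are illustrative defaults I've chosen, not research-prescribed values:

```python
from datetime import date, timedelta

def reactivation_dates(start: date, n: int,
                       first_gap: int = 2, factor: int = 3) -> list[date]:
    """Schedule n reactivations after `start`, with each gap growing
    by `factor`. The default gaps (2, 6, 18, ... days) are illustrative;
    tune them to how costly forgetting would be."""
    dates, gap = [], first_gap
    for _ in range(n):
        start = start + timedelta(days=gap)
        dates.append(start)
        gap *= factor
    return dates

for d in reactivation_dates(date(2016, 1, 27), 3):
    print(d)  # 2016-01-29, 2016-02-04, 2016-02-22
```

Each scheduled date could then deliver any of the three reactivation types above: a new model, a new example, or a new practice problem.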

Performance Detective

19 January 2016 by Clark Leave a Comment

I was on a case. I'm a performance detective, and that's what I do. Someone wasn't performing the way they were supposed to, and it was my job to figure out why. My client thought he knew. They always do. But I had to figure it out myself. Like always.

Before I hit the bricks, I hit the books. Look, there’s no point watching anyone if you don’t  know what you’re looking for.  What’s this mug supposed to be doing?  So I read up. What’s the job?  What’s the goal?  How do you know when it’s going well? These are questions, and I need answers. So I check it out.  Even better, if I can find numbers.  Can’t always, as some folks don’t really get the value.  Suckers.

Then I had to get a move on. You need what you find from the background, but you can't trust it. There could be many reasons why this palooka isn't up to scratch. Everyone wants to throw a course at it. And that may not be the problem. If it isn't a skill problem, it's not likely a course is going to help. You're wasting money.

The mug might not believe it's important. Or not want to do it a particular way. There're lots of reasons not to do it the way someone wants. It could be harder, with no obvious benefit. If you don't make it clear, why would they? People aren't always dumb, it just seems that way.

Or they might not have what they need. Too often, some well-intentioned but under-aware designer wants to put some arbitrary information in their heads. Which is hard. And usually worthless. Put it in the world. Have it to hand. They may need a tool, not a knowledge dump.

Or, indeed, they may not be capable. A course could be the answer. Not just a course, of course. It needs more. Coaching, and practice. Lots of practice.  They may really be out of their depth, and dumping knowledge on them is only going to keep them drowning.

It’s not always easy. It may not be a simple answer. There can be multiple problems. It can be all of the above.  Or any combination. And that’s why they bring me in. To get the right answer, not the easy answer. And certainly not the wrong answer.

So I had to go find out what was really going on.  That’s what detectives do. They watch. They investigate. They study.  That’s what I do. I want the straight dope. If you can’t do the hard yards, you’re in the wrong job.  I love the job. And I’m good at it.

So I watched. And sure enough, there it was. Obvious, really. In retrospect. But you wouldn’t have figured it out if you hadn’t looked.  It’s not my job to fix it.  I told the client what I found.  That’s it.  Not my circus, not my monkeys. Get an architect to come up with a solution. I find the problem, and report. That’s what I do.

This quite literally came from a dream I  had, and my subsequent thoughts when I woke up.  And when I first conceived it, I wasn’t thinking about the role that Charles Jennings, Jos Arets, and Vivian Heijnen have as one of  five in their new 70:20:10 book, but  there is a nice resonance.  Hopefully my ‘hard boiled’ prose isn’t too ‘on the nose’!  More importantly, what did I miss? I welcome  your thoughts and feedback.

Working wiser?

12 January 2016 by Clark Leave a Comment

Noodling:   I’ve been thinking about Working Smarter, a topic I took up over four years ago.  And while I still think there’s too little talk about it, I wondered also about pushing it further.  I also talked in the past about an interest in wisdom, and what that would mean for learning.  So what happens when they come together?

Working smarter, of course, means recognizing how we  really think, work, and learn, and aligning our processes and tools accordingly. That includes recognizing that we  do use external representations, and ensuring that the ones we want in the world are there, and we also support people being able to create their own. It means tapping into the power of people, and creating ways for them to get together and support one another through both communication and collaboration.  And, of course, it means using Serious learning design.

But what, then, does working  ‘wiser’ mean?  I like Sternberg’s model of wisdom, as it’s actionable (other models are not quite specific enough).  It talks about taking into account several levels of  caring about others, several time scales, several levels of action, and all influenced by an awareness of values.  So how do we work that into practices and tools?

Well, pragmatically, we can provide rubrics for evaluation of ideas that include considerations of others inside and outside your circles of your acquaintances, and in short- and long-term timeframes, and the impacts on existing states of affairs, ultimately focusing on the common good. So we can have job aids that provide guidance,  or bake it into our templates.  These, too, can be shown in collaboration tools, so the outputs will reflect these values.  But there’s another approach.

At core, though, it's really about what you value, and that becomes about culture. What values does the organization care about? Do employees know the organization's ultimate goal and role? Is it about short-term shareholder return, or some contribution to society? I'm reminded of the old statement about whether you're selling candles or providing light. And do employees know how what they do fits in?

It’s pretty clear that the values implicit in  steps to make workplaces more effective are really about making workplaces more humane, that is: respecting our inherent nature.  And movements like this, that provide real meaning, ongoing support, freedom of approach, and time for reflection, are to me about working not just smarter but also wiser.

We can work smarter with tools and practices, but I think we can work better, wiser, with an enlightened approach to who we are working with and how we work to deliver real value to not only customers but to society.  And, moreover, I think that doing so would yield better organizational  outcomes.

Ok, so have I gone off the edge of the hazy cosmic jive?  I am a native Californian, after all, but I’m thinking that this makes real business sense.  I think we can do this, and that the outputs will be better too, in all respects.  No one says it’d be easy, but my suspicion is it’d be worthwhile.

2015 Reflections

31 December 2015 by Clark 3 Comments

It’s the end of the year, and given that I’m an advocate for the benefits of reflection, I suppose I better practice what I preach. So what am I thinking I learned as a consequence of this past year?  Several things come to mind (and I reserve the right for more things to percolate out, but those will be my 2016 posts, right? :):

  1. The Revolution is real: the evidence mounts that there is a need for change in L&D, and when those steps are taken, good things happen. The latest Towards Maturity report shows that the steps taken by their top-performing organizations are very much about aligning with business, focusing on performance, and more. Similarly, Chief Learning Officer's Learning Elite Survey points to making links across the organization and measuring outcomes. The data supports the principled observation.
  2. The barriers are real: there is continuing resistance to the most obvious changes. 70:20:10, for instance, continues to get challenged on nonsensical issues like the exactness of the numbers!?!?  The fact that a Learning Management System is not a strategy still doesn’t seem to have penetrated.  And so we’re similarly seeing that other business units are taking on the needs for performance support, social media, and ongoing learning. Which is bad news for L&D, I reckon.
  3. Learning design is rocket science (or should be): the perpetration of so much bad elearning continues to be demonstrated at exhibition halls around the globe. It's demonstrably true that tarted-up information presentation and a knowledge test aren't going to lead to meaningful behavior change, but we're still thrusting people into positions without background and giving them tools oriented at content presentation. Somehow we need to do better. Still pushing the Serious eLearning Manifesto.
  4. Mobile is well on its way: we're seeing mobile becoming mainstream, and this is a good thing. While we still hear the drum beating to put courses on a phone, we're also seeing that call being ignored. We're instead seeing real needs being met, and new opportunities being explored. There's still a ways to go, but here's to a continuing awareness of good mobile design.
  5. Gamification is still being confounded: people aren’t really making clear conceptual differences around games. We’re still seeing linear scenarios confounded with branching, we’re seeing gamification confounded with serious games, and more.  Some of these are because the concepts are complex, and some because of vested interests.
  6. Games  seem to be reemerging: while the interest in games became mainstream circa 2010 or so, there hasn’t been a real sea change in their use.  However, it’s quietly feeling like folks are beginning to get their minds around Immersive Learning Simulations, aka Serious Games.   There’s still ways to go in really understanding the critical design elements, but the tools are getting better and making them more accessible in at least some formats.
  7. Design is becoming a 'thing': all the hype around Design Thinking is leading to a greater concern about design, and this is a good thing. Unfortunately there will probably be some hype from which clarity will need to be discerned, but at least the overall awareness-raising is a good step.
  8. Learning to learn seems to have emerged: years ago the late great Jay Cross and I and some colleagues put together the Meta-Learning Lab, and it was way too early (like so much I touch :p). However, his passing has raised the term again, and there's much more resonance. I don't think it's necessarily a thing yet, but there's far greater resonance than we had at the time.
  9. Systems are coming: I’ve been arguing for the underpinnings, e.g. content systems.  And I’m (finally) beginning to see more interest in that, and other components are advancing as well: data  (e.g. the great work Ellen Wagner and team have  been doing on Predictive Analytics), algorithms (all the new adaptive learning systems), etc. I’m keen to think what tags are necessary to support the ability to leverage open educational resources as part of such systems.
  10. Greater inputs into learning: we’ve seen learning folks get interested in behavior change, habits, and more.  I’m thinking we’re going to go further. Areas I’m interested in include myth and ritual, powerful shapers of culture and behavior. And we’re drawing on greater inputs into the processes as well (see 7, above).  I hope this continues, as part of learning to learn is to look to related areas and models.

Obviously, these are things I care about.  I’m fortunate to be able to work in a field that I enjoy and believe has real potential to contribute.  And just fair warning, I’m working on a few areas  in several ways.  You’ll see more about learning design and the future of work sometime in the near future. And rather than generally agitate, I’m putting together two specific programs – one on (e)learning quality and one on L&D strategy – that are intended to be comprehensive approaches.  Stay tuned.

That’s my short list, I’m sure more will emerge.  In the meantime, I hope you had a great 2015, and that your 2016 is your best year yet.

Scenarios and Conceptual Clarity

10 December 2015 by Clark 5 Comments

I recently came across an article ostensibly about branching scenarios, but somehow the discussion largely missed the point.  Ok, so I can be a stickler for conceptual clarity, but I think it’s important to distinguish between different types of scenarios and their relative strengths and weaknesses.

So in my book  Engaging Learning, I was looking to talk about how to make engaging learning experiences.  I was pushing games (and still do) and how to design them, but I also wanted to acknowledge the various approximations thereto.  So in it, I characterized the differences between what I called mini-scenarios, linear scenarios, and contingent scenarios (this latter is what’s traditionally called branching scenarios).  These are all approximations to full games, with various tradeoffs.

At core, let me be clear, is the need to put learners in situations where they need to make decisions. The goal is to have those decisions closely mimic the decisions they need to make  after the learning experience. There’s a context (aka the story setting), and then a specific situation triggers the need to make a decision.  And we can deliver this in a number of ways. The ideal is a simulation-driven (aka model-driven or engine-driven) experience.  There’s  a model of the world underneath that calculates the outcomes of your action and determines whether you’ve yet achieved success (or failure), or generates  a new opportunity to act.  We can (and should) tune this into a serious game.  This gives us deep experience, but the model-building is challenging and there are short cuts.

In mini-scenarios, you put the learner in a setting with a situation that precipitates a decision. Just one, and then there's feedback. You could use video, a graphic novel format, or just prose, but the game problem is a setting and a situation, leading to choices. Similarly, you could have them respond by selecting option A, B, or C, or pointing to the right answer, or whatever. It stops there. Which is the weakness, because in the real world the consequences are typically more complex than this, and it's nice if the learning experience reflects that reality. Still, it's better than a knowledge test. Really, these are just better-written multiple-choice questions, but that's at least a start!

Linear scenarios are a bit more complex. There is a series of game problems in the same context, but whatever the player chooses, the right decision is ultimately made, leading to the next problem. You use some sort of sleight of hand, such as "a supervisor catches the mistake and rectifies it, informing you…", to make it all ok. Or you can terminate and require a restart if the wrong decision is made at any point. These are a step up in terms of showing the more complex consequences, but are a bit unrealistic. There's some learning power here, but not as much as is possible. I have used them as a sort of multiple mini-scenarios with content in between, where the same story carries through to the next choice, which at least made a nice flow. Cathy Moore suggests these are valuable for novices, and I think they're also useful if everyone needs to receive the same 'test' in some accreditation environment to be fair and balanced (though in a competency-based world they'd be better off with the full game).

Then there's the full branching scenario (which I called contingent scenarios in the book, because the consequences, and even the new decisions, are contingent on your choices). That is, you see different opportunities depending on your choice. If you make one decision, the subsequent ones are different. If you don't shut down the network right away, for instance, the consequences are different (perhaps a breach) than if you do (you get the VP mad). This, of course, is much more like the real world. The only difference between this and a serious game is that the contingencies in the world are hard-wired in the branches, not captured in a separate model (rules and variables). This is easier, but it gets tough to track if you have too many branches. And the lack of an engine limits the replay value and the ability to have randomness. Of course, you can make several of these.
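One way to see the distinction: a branching scenario hard-wires the contingencies as a graph of decision nodes, where each choice leads to a different next node; a mini-scenario is a single node with terminal feedback, and a linear scenario funnels every choice back onto one spine. A minimal sketch of the branching structure, using the network-shutdown example (the node names and story details are invented):

```python
# Each node is a situation plus choices; each choice points to a
# different node, so the consequences are contingent on the decision.
scenario = {
    "breach_detected": {
        "prompt": "You spot suspicious traffic on the network.",
        "choices": {
            "Shut the network down now": "vp_angry",
            "Keep monitoring quietly": "data_breach",
        },
    },
    "vp_angry": {
        "prompt": "Sales grinds to a halt; the VP wants answers.",
        "choices": {},  # terminal node: feedback, end of path
    },
    "data_breach": {
        "prompt": "Customer records leak overnight.",
        "choices": {},  # terminal node
    },
}

def next_node(current: str, choice: str) -> str:
    """Follow one decision to its contingent consequence."""
    return scenario[current]["choices"][choice]

print(next_node("breach_detected", "Keep monitoring quietly"))  # → data_breach
```

A serious game would replace this hard-wired graph with rules and variables that compute the next state, which is what makes replay and randomness possible.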

So the problem I had with the article that triggered this post is that its generic model looked like a mini-scenario, and nowhere did it show the full concept of a real branching scenario. Further, its example was really a linear scenario, not a branching scenario. And I realize this may seem like an 'angels dancing on the head of a pin' argument, but I think it's important to make distinctions when they affect the learning outcome, so you can more clearly make a choice that reflects the goal you are trying to achieve.

To their credit, the fact that they were pushing for contextualized decision making at all is a major win, so I don't want to quibble too much. Moving our learning practice/assessment/activity toward more contextualized performance is a good thing. Still, I hope this elaboration is useful in getting to more nuanced solutions. Learning design really can't be treated as a paint-by-numbers exercise; you really should know what you're doing!
