Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

25 February 2015

mLearning more than mobile elearning?

Clark @ 6:17 AM

Someone tweeted about their mobile learning credo, and mentioned the typical ‘mlearning is elearning, extended’ view, which I rejected, as I believe mlearning is much more (and so should elearning be).  And then I thought about it some more.  So I’ll lay out my thinking, and see what you think.

I have been touting that mLearning could and should be focused, as should P&D, on anything that helps us achieve our goals better. Mobile, paper, computers, voodoo, whatever technology works.  Certainly in organizations.  And this yields some interesting implications.

So, for instance, this would include performance support and social networks.  Anything that requires understanding how people work and learn would be fair game. I was worried about whether that fit some operational aspects like IT and manufacturing processes, but I think I’ve got that sorted.  UI folks would work on external products, and any internal software development, but around that, helping folks use tools and processes belongs to those of us who facilitate organizational performance and development.  So we, and mlearning, are about any of those uses.

But the person, despite seeming to come from a vendor to orgs, not schools, could be talking about schools instead, and I wondered whether mLearning for schools, definitionally, really is only about supporting learning.  And I can see the case for that: that mlearning in education is about using mobile to help people learn, not perform.  It’s about collaboration, for sure, and tools to assist.

Note I’m not making the case for schools as they are, a curriculum rethink definitely needs to accompany using technology in schools in many ways.  Koreen Pagano wrote this nice post separating Common Core teaching versus assessment, which goes along with my beliefs about the value of problem solving.  And I also laud Roger Schank‘s views, such as the value (or not) of the binomial theorem as a classic example.

But then, mobile should be a tool in learning, so it can work as a channel for content, but also for communication, and capture, and compute (i.e. the 4C’s of mlearning).  And the emergent capability of contextual support (the 5th C, i.e. combinations of the first four).  So this view would argue that mlearning can be used for performance support in accomplishing a meaningful task that’s part of a learning experience.

That would take me back to mlearning being more than just mobile elearning, as Jason Haag has aptly separated.  Sure, mobile elearning can be a subset of mlearning, but not the whole picture. Does this make sense to you?

24 February 2015

Making ‘sense’

Clark @ 8:19 AM

I recently wrote about wearables, where I focused on form factor and information channels.  An article I recently read talked about a guy who builds spy gear, and near the end he talked about some things that started me thinking about an extension of that for all mobile, not just wearables.  The topic is  sensors.

In the article, he talks about how, in the future, glasses could detect whether you’ve been around bomb-making materials:

“You can literally see residue on someone if your glasses emit a dozen different wavelengths of microlasers that illuminate clothing in real time and give off a signature of what was absorbed or reflected.”

That’s pretty amazing, chemical spectrometry on the fly.  He goes on to talk about distance vision:

“Imagine you have a pair of glasses, and you can just look at a building 50 feet away, 100 feet away, and look right through the building and see someone moving around.”

Now, you might or might not like what he’s doing with that, but imagine applying it elsewhere: identifying where people are for rescue, or identifying materials for quality control.

Heck, I’d find it interesting just to augment the camera with infrared and ultraviolet: imagine being able to use the camera on your phone or glasses to see what’s happening at night, e.g. wildlife (tracking coyotes or raccoons, and managing to avoid skunks!).  Night vision, and seeing things that fluoresce under UV would both be really cool additions.

I’d be interested too in having them able to enlarge as well, bringing small things to light like a magnifying glass or microscope.

It made me think about all the senses we could augment. I was thinking about walking our dogs, and how their olfactory life is much richer than ours.  They are clearly sensing things beyond our olfactory capabilities, and it would be interesting to have some microscent detectors that could follow faint traces to track animals (or know which owner is not adequately controlling a dog, ahem).  They could potentially serve as smoke or carbon monoxide detectors as well.

Similarly, auditory enhancement: could we hear things fainter than our ears detect, or have them serve as a stethoscope?  Could we detect far off cries for help that our ears can’t? Of course, that could be misused, too, to eavesdrop on conversations.  Interesting ethical issues come in.

And we’ve already heard about the potential to measure one’s movement, blood pressure, pulse, temperature, and maybe even blood sugar, to track one’s health.  The fit bands are getting smarter and more capable.

There is the possibility for other things we personally can’t directly track: measuring ambient temperatures quantitatively, and air pressure are both already possible and in some devices.  The thermometer could be a health and weather guide, and a barometer/altimeter would be valuable for hiking in addition to weather.

The combination of reporting these could be valuable too.  Sensor nets, where the data from many micro sensors are aggregated, have interesting possibilities: either with known combinations, such as aggregating temperature and air pressure to help with weather, or via machine learning, where, for example, we include sensitive motion detectors and might be able to learn to predict earthquakes as animals supposedly can.  Sound, too, could be used to triangulate on cries for help, and material detectors could help locate sources of pollution.
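To make the “known combinations” case concrete, here’s a toy sketch of aggregating readings from many small sensors into one crude weather hint. The function names, units, and the falling-pressure-means-storm rule are all my illustrative assumptions, not any real device API:

```python
from statistics import mean

def falling_pressure(readings_hpa):
    """True if barometric pressure is trending down (a rough storm signal)."""
    return len(readings_hpa) >= 2 and readings_hpa[-1] < readings_hpa[0]

def weather_hint(temps_c, pressures_hpa):
    """Combine many micro-sensors' reports into one crude forecast hint."""
    avg_temp = mean(temps_c)  # aggregate the scattered temperature sensors
    if falling_pressure(pressures_hpa):
        return f"{avg_temp:.1f}°C, pressure falling: possible storm"
    return f"{avg_temp:.1f}°C, pressure steady or rising: fair"

print(weather_hint([21.0, 22.5, 21.5], [1015, 1012, 1008]))
```

The machine-learning version would replace the hand-coded pressure rule with a model trained on many such aggregated streams.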

We’ve done amazing things with technology, and sensors are both shrinking and getting more powerful. Imagine having sensors scattered about your body in various wearables and integrating that data in known ways, and agreeing for anonymous aggregation for data mining.  Yes, there are concerns, but benefits too.

We can put these together in interesting ways, notifications of things we should pay attention to, or just curiosity to observe things our natural senses can’t detect.  We can open up the world in powerful ways to support being more informed and more productive.  It’s up to us to harness it in worthwhile ways.

17 February 2015

Engage, yea or nay?

Clark @ 8:20 AM

In a recent chat, a colleague I respect said the word ‘engagement’ was anathema.  This surprised me, as I’ve been quite outspoken about the need for engagement (for one small example, writing a book about it!).  It may be that the conflict is definitional, for it appeared that my colleague and another respondent viewed engagement as bloating the content, and that’s not what I mean at all. So I thought I’d lay out what I mean when I say engaging, and why I think it’s crucial.

Let’s be clear what I don’t mean.  If you think by engagement it’s adding in extra stuff, we’re using a very different definition of engagement.  It’s not about tarting up uninteresting stuff with ‘fun’ (e.g. racing-themed window dressing on a knowledge test).  It’s not about putting in unnecessary unrelated imagery, sounds, or anything else.  Heck, the research of Dick Mayer at UCSB shows this actually hinders learning!

So what do I mean?  For one thing, stripping away any ‘nice to have’ or unnecessary info.  Lean is engaging!  You have to focus on what really will help the learners, and present it in ways that they get.  And then work on that ‘in the ways they get’ bit.

You need contextualized practice.  Engaging is making the context meaningful to the learners.  You need contextualization (e.g. research by John Bransford on anchored cognition), but arbitrary contextualization isn’t as good as intrinsically interesting contexts.  This isn’t window dressing, since you need to be doing it anyway, but do it in a minimal style (as de Saint-Exupery said: “Perfection is finally attained not when there is no longer anything to add but when there is no longer anything to take away…”).

You want compelling examples. We know that examples lead to better learning (à la, for instance, John Sweller’s work on cognitive load), but again, making them meaningful to the learners is critical. This isn’t window dressing, as we need them, but they’re better if they’re well told as intrinsically interesting stories.

Finally, we need to introduce the learning.  Too often we do this in ways that the learner doesn’t get the WIIFM (What’s In It For Me).  Learners learn better when they’re emotionally open to the content instead of uninterested. This may be a wee bit more, but we can account for this by getting rid of the usual introductory stuff.  And it’s worth it.

Now, let’s be clear, this is for when we’ve deemed formal learning as necessary. When the audience is practitioners who know what they need and why it’s important, then giving them ‘just the facts’, performance support, is sufficient.  But if it’s new skills they need, when you need a learning experience, then you want to make it engaging. Not extrinsically, but intrinsically.  And that’s not more in quantity, it’s not bloated; it’s more in quality: minimal in content and maximal in immersion.

Engaging learning is a good thing, a better thing than not, the right thing.  I’m hoping it’s just definitional, because I can’t see the contrary argument unless there’s confusion over what I mean.  Anyone?

11 February 2015

Rethinking Redux

Clark @ 9:04 AM

Last week I wrote about Rethinking, how we might want and need to revise our approaches, and showed a few examples of folks thinking out of the box and upending our cherished viewpoints.  I discovered another one (much closer to ‘home’) and tweeted it out, only to get a pointer to another.  I think it’s worth looking at these two examples that help make the point that maybe it’s time for a rethink of some of our cherished beliefs and practices.

The first was a pointer from a conversation I had with the proprietor of an organization with a new mobile-based coaching engine.  Among the things touted was that much of our thinking about feedback appears to be wrong.  I was given a reference and found an article that indeed upends our beliefs about the benefits of feedback.

The article investigates performance reviews, and finds them lacking, citing one study that found:

“a meta-analysis of 607 studies of performance evaluations and concluded that at least 30% of the performance reviews ended up in decreased employee performance.”

A 30% decrease in performance?  And that’s not counting the others that are merely neutral.  That’s a pretty bad outcome!  Worse, the Society for Human Resource Management is cited as stating “90% of performance appraisals are painful and don’t work”.  In short, one of the most common performance instruments is flawed.

As a consequence of tweeting this out, a respondent pointed to another article that he was reminded of.  This one upends the notion that we’re good at rating others’ behavior: “research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance”.  That is, 360 degree reviews, manager reviews, etc., are fundamentally based upon review by others, and they’re demonstrably bad at it.  The responses given have reliable biases that make the data invalid.

As a consequence, again, we cannot continue as we are:

“we must first stop, take stock, and admit to ourselves that the systems we currently use to reveal our people only obscure them”

This is just like learning styles: there’s no reliable data that they work, and the measurement instruments used are flawed. In short, one of the primary tools for organizational improvement is fundamentally broken.  We’re using industrial age tools in an information age.

What’s a company to do?  The first article quoted Josh Bersin when saying “companies need to focus very heavily on ‘collaboration, professional development, coaching and empowering people to do great things’“.  This is the message of the Internet Time Alliance and an outflow of the Coherent Organization model and the L&D Revolution.  There are alternatives that are more respectful of how people really think, work, and learn, and consequently more effective.  Are you ready to rethink?


10 February 2015

The Grail of Effective and Engaging Learning Experiences

Clark @ 8:08 AM

There’s a considerable gap between what we can be doing, and what we are doing.  When you look at what’s out there, we see that there are several ways in which we fall short of the mark.  While there are many dimensions that could be considered, for the sake of simplicity let’s characterize the two important ones as effectiveness of our learning and the engagement of the experience.  And I want to characterize where we are and where we could be, and the gaps we need to bridge.

If we map the space, we see that the lower left is the space of low engagement and low effectiveness.  Too much elearning resides there.  Now, to be fair, it’s easy to add engaging media and production values, so the space of typical elearning does span from low to high engagement. Moving up the diagram, however, towards increasing effectiveness, is an area that’s less populated.  The red line separates the undesirable areas from the space we’d like to start hitting, where we begin to have some modicum of both effectiveness and engagement, moving towards the upper right.  This space is relatively sparsely populated, I’m afraid.  And while there are instances of content that do increase the effectiveness, there’s little that really hits the ultimate goal, the holy grail, where a fully integrated effective and engaging experience is achieved.
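The two-dimensional map can be sketched as a toy classifier. The 0–1 scores and the 0.5 cutoff standing in for the “red line” are my illustrative assumptions, not a real measurement scheme:

```python
def quadrant(effectiveness: float, engagement: float, threshold: float = 0.5) -> str:
    """Place a learning experience on the two dimensions (scores in 0..1).

    `threshold` is an arbitrary stand-in for the 'red line' separating the
    undesirable region from the space worth aiming for.
    """
    effective = effectiveness >= threshold
    engaging = engagement >= threshold
    if effective and engaging:
        return "the grail: effective and engaging"
    if effective:
        return "effective but dry"
    if engaging:
        return "engaging but empty (too much elearning lives here)"
    return "neither: low engagement, low effectiveness"

print(quadrant(0.2, 0.8))  # high production values, little learning design
```

The point of the sketch: adding media moves you rightward on engagement alone; only better learning design moves you up.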

How do we move in the right direction? I’ve talked before about trying to hit the sweet spot of maximal effectiveness within pragmatic constraints.  Certainly from an effectiveness standpoint, you should be looking at the components of the Serious eLearning Manifesto.  To get effective learning, you need a number of elements, for instance:

  • meaningful practice: practice aligned with the real world task
  • contextualized practice: learning across contexts that support transfer
  • sustained practice: sufficient and increasingly challenging practice to develop the skills to the necessary level
  • spaced practice: practice spread out over time (brains need sleep to learn more than a certain threshold)
  • real world consequences providing feedback coupled with scaffolded reflection
  • model-based guidance: the best guide for practice is a conceptual basis (not rote information)
  • appropriate examples: that show the concepts being applied in context

Some of these elements also contribute to engagement, along with others.  Components include:

  • learning-centered contexts: problems learners recognize as important
  • learner-centered contexts: problems learners want to solve
  • emotionally engaging introductions: hooking learners in viscerally as well as cognitively
  • adapted challenge: ramping up the challenge appropriately to avoid both boredom and frustration
  • unpredictability: maintaining the learner’s attention through surprise
  • meaningfulness: learners playing roles they want to be in
  • drama and/or humor

The integration of these elements was the underlying premise behind Engaging Learning, my book on integrating effectiveness and engagement, specifically on making meaningful practice, e.g. serious games.  Serious games are one way to achieve this end, by contextualizing practice as decisions in a meaningful environment and using a game engine to adapt the challenge and provide essentially unlimited practice.

Other approaches achieve much of this effectiveness in different ways. Branching scenarios are powerful approximations to this by showing consequences in context but with limited replay, and so are constructivist and problem-based learning pedagogies. This may sound daunting, but with practice, and some shortcuts, this is doable.

For example, Socratic Arts has a powerful online pedagogy that leverages media and a constructivist pedagogy in a relatively simple framework. The learner is given ‘assignments’ that mirror real world tasks, via emails or videos of characters playing roles such as a boss.  The outputs required similarly mimic work products you might find in this area. Scaffolding is available in a couple of ways: videos of experts and documents are available as resources, to support the learner in getting the best outcome.  While it’s low on fancy visual design, it’s effective because it’s closely aligned to the needed skills post-learning.  And the cognitive challenge is pitched at the right level to engage the intellect, if not the aesthetics.  This is a cost-effective balance.

The work I did with the Wadhwani Foundation hit a slightly different spot in trying to get to the grail.  I didn’t have the ability to work quite as tightly with the SMEs from the get-go, and we didn’t have the ability to simulate the hands-on tasks as well as we’d like,  but we did our best to infer real tasks and used low-tech simulations and scenarios to make it effective.  We did use more media, animations and contextualized videos, to make the experience more engaging and effective as well.

The point being that we can start making learning more effective and engaging in practical ways. We need to make it effective, or why bother?  We should make it engaging, to optimize the outcomes and not insult our learners. And we can.  So why don’t we?

5 February 2015

Agile Bay Area #LNDMeetup Mindmap

Clark @ 8:05 AM

I’ve been interested in process, so I attended this month’s Bay Area Learning Design Meetup that showcased LinkedIn’s work on Agile using Scrum for learning design. It was very nice of them to share the specifics of their process, and while there were more details than time permitted to cover, it was a great beginning to understand the differences.

Basically, a backlog is kept of potential new projects.  They’re prioritized and a subset is chosen as the basis of the sprint and put on the board.  Then for two weeks they work on hitting the elements on the board, with a daily standup meeting to present where they’re at and synchronize.  At the end they demo to the stakeholders and reflect.  As part of the reflection, they’re supposed to change something for the next iteration.
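The sprint-planning step described above can be sketched as a simple capacity-bounded selection. The ~100-point capacity is the empirical figure they mentioned; the item names, priorities, and greedy selection rule are my invented illustration, not LinkedIn’s actual process:

```python
def plan_sprint(backlog, capacity=100):
    """Pick the highest-priority backlog items that fit the sprint capacity.

    backlog: list of (name, priority, story_points); lower priority = more urgent.
    Returns the sprint board and the story points committed.
    """
    board, used = [], 0
    for name, _prio, points in sorted(backlog, key=lambda item: item[1]):
        if used + points <= capacity:  # only take what fits this sprint
            board.append(name)
            used += points
    return board, used

backlog = [
    ("compliance elearning", 1, 60),
    ("sales job aid", 2, 20),
    ("onboarding video", 3, 40),   # too big to fit this sprint
    ("manager checklist", 4, 15),
]
print(plan_sprint(backlog))
```

Note the mix of deliverables on one board, mirroring their point that the ~100 points may be distributed across elearning, job aids, and whatever else.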

There are different roles: a project owner, who’s the ‘client’ in a sense (and a liaison to whoever may be the end client); a Scrum master, who’s responsible for facilitating the group through the steps; and then the team, which should be small but at least represent all the necessary roles to execute whatever is being accomplished.

When I asked about scope, they said that they’ve found they can do about 100 story points (which are empirical) in a sprint, and they may distribute that across some elearning, some job aids, whatever.  They didn’t seem too eager to try to quantify that relative to other known metrics, and I understand it’s hard, particularly in the time they had.  Here’s the Mindmap:



Allen Interactions also discussed their SAM process (which I know and like), but the mind map didn’t match too well to their usual diagram (only briefly shown at the end), and I ran out of time trying to remedy it. It’s better just to look at the diagram ;).


3 February 2015

Rethinking

Clark @ 7:58 AM

(in the future)
Dr. Melik: You mean there was no deep fat? No steak or cream pies? Or hot fudge?

Dr. Agon: Those were thought to be unhealthy, precisely the opposite of what we now know to be true.

In Woody Allen’s Sleeper about someone who wakes up in the future, one of the jokes is that all the things we thought were true are turned on their head.  I was talking with my colleague Jay Cross in terms of why we’re not seeing more uptake of the opportunities for L&D to move out of the industrial age, and one of the possible explanations is satisfaction with the status quo. And I was reminded of several articles I’ve read that support the value of rethinking.

In Sweden, for principled reasons, they decided that the model of prosecuting the prostitute wasn’t fair. She was, they argued, a victim. Instead, they decided to punish the solicitation of the service, a complete turnaround from the previous approach.  It has reduced sex trafficking, for one outcome. Other countries are now looking at their model, and some have already adopted it.

In Portugal, which was experiencing problems with drugs, they took the radical step of decriminalizing them and offering users treatment instead.  While it’s not a panacea, it has not led to the massive increase in usage that was expected.  Which is a powerful first step.  It may also be a small step toward undoing some of the misconceptions about addiction.

And in Denmark there was an experiment in doing away with road signs. The premise was that folks, given regulations, will trust the regulations to work; if you remove them, drivers have to go back to assessing the situation, and they’ll drive more safely.  It appears, indeed, to be the case.

I could go on: the food pyramid, cubicles… more and more ideas are being shown to be misguided if not out and out wrong.  And the reason I raise this is to suggest that complacency about anything, accepting the received wisdom, may not be helpful.  Patti Shank recently wrote about the burden of having an informed opinion, arguing that we need to take ownership of our beliefs, and I think that’s right.

There are lots of approaches to get out of the box: appreciative inquiry, positive deviance, double loop learning, the list goes on.  Heck, there’s even the silly and overused but apt cliche about the definition of insanity. The point being that regular reflection is part of being a learning organization.   You need to be looking at what you’re doing, what others are doing, and what others are saying.  Continual improvement is part of the ongoing innovation that today’s organization needs to thrive.

Yes, we can’t query everything, but if we have an area of responsibility, e.g. being in charge of learning strategy, we owe it to ourselves to know what the alternative approaches might be. And we certainly should be looking at what we’re doing and what impact it’s having.  Measuring just efficiency instead of impact?  Being an order taker and not investigating the real cause?  Not looking at the bigger picture?  Ahem.  I am positing, via the Revolution, that L&D isn’t doing anywhere near what it could and should, and we are positing, via the Manifesto, that what it is doing, it is doing badly.  So, what’s the response?  I’ve done the research to suggest that there’s a need for a rethink, and I’m trying to foster it. So where do we go from here?  Where do you go from here?  Steak, anyone?

