Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

13 September 2017

Why AR

Clark @ 8:07 AM

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up.  I’m sure there’s lots more that could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand drawn), so understand I’m talking concept here, not execution!


Here, I’m starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses).  The system could tell me it’s a tree, not a bush, technically (thanks to my flora-wise better half).  It could also illustrate how large it is.  Finally, the view could indicate that what I’m viewing is a magnolia (which I wouldn’t have known), and show me, off to the right, the flower bud stage.

The point is that we can get information around the particular thing we’re viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses.  Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn.  And, perhaps, with some additional incidental information on the periphery of my interests, for serendipity.
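To make the filtering idea concrete, here’s a minimal sketch (all names and annotation data are hypothetical, not from any real AR system): annotations tagged by topic are matched against the viewer’s declared interests, with one off-interest item slipped in for serendipity.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    label: str
    topic: str

def select_annotations(annotations, interests, serendipity=1, rng=None):
    """Keep annotations matching the viewer's declared interests, plus a
    few off-interest items for serendipitous discovery."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    matched = [a for a in annotations if a.topic in interests]
    others = [a for a in annotations if a.topic not in interests]
    return matched + rng.sample(others, min(serendipity, len(others)))
```

A viewer interested only in medicinal uses would see that annotation plus one surprise from another topic; the same scene filters differently for a different viewer.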

Going wider, here I’m looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more, pointing out that the green ridges are grapes, or providing the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view.  As we moved around, we’d point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges.  It could also identify the river flowing past to the north.  And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever’s relevant could be filtered in or out.

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I’ve pointed to the clouds (and indicated the likelihood of rain). Similarly, I’ve identified the rock and the mechanism that shaped it. (These are all made up, and could be wrong; Mt Faux definitely is!)  We might even be able to tap a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, we can capitalize on them to develop my understanding. And this is as an adult. Think about doing this for kids, layering on information in their Zone of Proximal Development and interests!  I know VR’s cool, and has real learning potential, but there you have to create the context. Here we’re taking advantage of it. That may be harder, but it’s going to have some real upsides when it can be done ubiquitously.

20 July 2017

Augmented Reality Lives!

Clark @ 8:07 AM

Augmented Reality (AR) is on the upswing, and I think this is a good thing. I think AR makes sense, and it’s nice to see both solid tool support and real use cases emerging.  Here’s the news, but first, a brief overview of why I like AR.

As I’ve noted before, our brains are powerful, but flawed.  As with any architecture, any one choice will end up with tradeoffs. And we’ve traded off detail for pattern-matching.  And, technology is the opposite: it’s hard to get technology to do pattern matching, but it’s really good at rote. Together, they’re even more powerful. The goal is to most appropriately augment our intellect with technology to create a symbiosis where the whole is greater than the sum of the parts.

Which is why I like AR: it’s about annotating the world with information, which augments it to our benefit.  It’s contextual, that is, doing things because of when and where we are.  AR augments sensorily, either auditory or visual (or kinesthetic, e.g. vibration).  Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: “turn left here”).  Non-coordinated visual information, information that’s not overlaid visually, is presented as either graphics or text (think Yelp: maps and distances to nearby options).  Tools already exist to do this, e.g. ARIS.  However, arguably the most compelling and interesting are aligned visuals.
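As a rough illustration of “doing things because of when and where we are”: a location trigger can be as simple as a distance test against tagged positions. This sketch (labels and coordinates are invented) uses the standard haversine formula to decide which annotations are near enough to show.

```python
import math

def nearby_annotations(annotations, here, radius_m=100.0):
    """Return labels of annotations within radius_m of the viewer's
    position. Positions are (lat, lon) pairs in degrees; great-circle
    distance is computed with the haversine formula."""
    def haversine(a, b):
        R = 6371000.0  # mean Earth radius in meters
        la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
        h = (math.sin((la2 - la1) / 2) ** 2
             + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(h))
    return [label for label, pos in annotations if haversine(here, pos) <= radius_m]
```

A real system would also use the device’s heading to decide what is actually in view, but proximity is the first filter.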

Google Glass was a really interesting experiment, and it’s back.  The devices – glasses with a camera and a projector that can present information on the glass – were available, but didn’t do much with where you were looking. There was a generic heads-up display and camera, but little alignment between what was seen and what was consequently presented to the user as additional information.  That’s changed. Google Glass has a new Enterprise Edition, and it’s being used to meet real needs and generate real outcomes. Glasses are supporting workers in manufacturing situations requiring careful placement.  The necessary components and steps are highlighted on screen, reducing errors and speeding up outcomes.

And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy.  One interesting aspect is built-in machine learning, which could make aligning with objects in the world easy!  Incompatible platforms and standards impede progress, but with Google and Apple creating tools for each of their platforms, development can be accelerated. (I hope to find out more at the eLearning Guild’s Realities 360 conference.)

While I think Virtual Reality (VR) has an important role to play for deep learning, I think contextual support can be great for extending learning (particularly personalization), as well as for performance support.  That’s why I’m excited about AR. My vision has been that we’ll have a personal coaching system that will know where and when we are and what our goals are, and be able to facilitate our learning and success. Tools like these will make it easier than ever.

27 June 2017

FocusOn Learning reflections

Clark @ 8:08 AM

If you follow this blog (and you should :), it was pretty obvious that I was at the FocusOn Learning conference in San Diego last week (previous 2 posts were mindmaps of the keynotes). And it was fun as always.  Here are my reflections on what happened a bit more, as an exercise in meta-learning.

There were three themes to the conference: mobile, games, and video.  I’m pretty active in the first two (two books on mobile, one on games), and the last is related to things I care about and talk about.  The focus led to some interesting outcomes: some folks were very interested in just one of the topics, while others were looking a bit more broadly.  Whether that’s good or not depends on your perspective, I guess.

Mobile was present, happily, and continues to evolve.  People are still talking about courses on a phone, but more folks were talking about extending the learning.  Some of it was pretty dumb – just content or flash cards as learning augmentation – but there were interesting applications. Importantly, there was a growing awareness about performance support as a sensible approach.  It’s nice to see the field mature.

For games, there were positive and negative signs.  The good news is that games are being more fully understood in terms of their role in learning, e.g. deep practice.  The bad news is that there’s still a lot of interest in gamification without a concomitant awareness of the important distinctions. Tarting up drill-and-kill with PBL (points, badges, and leaderboards; the new acronym apparently) isn’t worth significant interest!  We know how to drill things that must be known, but our focus should be on intrinsic interest.

As a side note, the demise of Flash has left us without a good game development environment. Flash was both a development environment and a delivery platform. As a development environment, Flash had a low learning threshold, and yet could be used to build complex games.  As a delivery platform, however, it was woefully insecure (so much so that it’s been proscribed in most browsers). The fact that Adobe couldn’t be bothered to generate acceptable HTML5 out of the development environment, and let it languish, leaves the market open for another accessible tool. Unity and Unreal provide good support (as I understand it), but still require coding.  So we’re not at an easily accessible place. Oh, for HyperCard!

Much of the video interest was in technical issues (how to get quality, and/or get it on the cheap), but a lot was also in interactive video. I think branching video is a really powerful learning environment for contextualized decision making.  As a consequence, the advent of tools that make it easier is to be lauded. An interesting session with the wise Joe Ganci (@elearningjoe) and a GoAnimate guy discussed when to use video versus animation, which largely reflected my view (confirmation bias ;) that it’s about whether you want more context (video) or concept (animation). Of course, it was also about the cost of production and the need for fidelity (video more than animation in both cases).
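A branching video is, at bottom, a graph of segments with labeled choices. A minimal sketch of that structure (segment names, files, and decisions are all made up for illustration):

```python
# Each node is a video segment; 'choices' maps a viewer decision to the
# next segment to play. An empty 'choices' dict marks a terminal segment.
scenario = {
    "greet":     {"video": "greet.mp4",     "choices": {"ask": "question", "act": "treatment"}},
    "question":  {"video": "question.mp4",  "choices": {"act": "treatment"}},
    "treatment": {"video": "treatment.mp4", "choices": {}},
}

def play(scenario, start, decisions):
    """Walk the branching structure, returning the segments viewed in order."""
    node, path = start, [start]
    for choice in decisions:
        nxt = scenario[node]["choices"].get(choice)
        if nxt is None:  # terminal segment or invalid choice: stop
            break
        node = nxt
        path.append(node)
    return path
```

The recorded path is also exactly what you’d want to capture for assessing the learner’s decision making.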

There was a lot of interest in VR, which crossed over between video and games.  Which is interesting because it’s not inherently tied to games or video!  In short, it’s a delivery technology.  You can do branching scenarios, full game engine delivery, or just video in VR. The visuals can be generated as video or from digital models. There was some awareness, e.g. fun was made of the idea of presenting powerpoint in VR (just like 2nd Life ;).

I did an ecosystem presentation that contextualized all three (video, games, mobile) in the bigger picture, and also drew upon their cognitive and then L&D roles. I also deconstructed the game Fluxx (a really fun game with an interesting ‘twist’). Overall, it was a good conference (and nice to be in San Diego, one of my ‘homes’).

23 May 2017

Some new elearning companies ;)

Clark @ 8:03 AM

As I continue to track what’s happening, I get the opportunity to review a wide number of products and services. While tracking them all would be a full-time job, occasionally some offer new ideas.  Here’s a collection of those that have piqued my interest of late:

Sisters eLearning: these folks are taking a kinder, gentler approach to their products and marketing their services.  Their signature offering is a suite of templates for your elearning featuring cooperative play.  Their approach in their custom development is quiet and classy. This is reflected in the way they promote themselves at conferences: they all wear mauve polos and sing beautiful a cappella.  Instead of giveaways, they quietly provide free home-baked mini-muffins for all.

Yalms: these folks are offering the ‘post-LMS’. It’s not an LMS, and instead offers course management, hosting, and tracking.  It addresses compliance, and checks a whole suite of boxes such as media portals, social, and many non-LMS things including xAPI. Don’t confuse them with an LMS; they’re beyond that!

MicroBrain: this company has developed a system that makes it easy to take your existing courses and chunk them up into little bits. Then it pushes them out on a schedule. It’s a serendipity model, where there’s a chance it just might be the right bit at the right time, which is certainly better than your existing elearning. Most importantly, it’s mobile!

OffDevPeeps: these folks offer a full suite of technology development services including mobile, AR, VR, micro, macro, long, short, and anything else you want, all done at a competitive cost. If you are focused on the ‘fast’ and ‘cheap’ corners of the triangle, these are the folks to talk to. Coming soon to an inbox near you!

DanceDanceLearn: provides a completely unique offering. They have developed an authoring tool that makes it easy for you to animate dancers moving in precise formations that spell out content. They also have a synchronized swimming version.  Your content can be even more engaging!

There, I hope you’ll find these of interest, and consider checking them out.

Any relation between the companies portrayed and real entities is purely coincidental.  #couldntstopmyself #allinfun

10 May 2017

Designing Microlearning

Clark @ 8:04 AM

Yesterday, I clarified what I meant about microlearning. Earlier, I wrote about designing microlearning, but what I was really talking about was the design of spaced learning. So how should you design the type of microlearning I really feel is valuable?

To set the stage: here we’re talking about layering learning on performance in a context. However, it’s more than just performance support. Performance support would be providing a set of steps (in whatever ways: series of static photos, video, etc) or supporting those steps (checklist, lookup table, etc).  And again, this is a good thing, but microlearning, I contend, is more.

To make it learning, what you really need is to support developing an ability to understand the rationale behind the steps, to support adapting the steps in different situations. Yes, you can do this in performance support as well, but here we’re talking about models.

What (causal) models give us is a way to explain what has happened, and predict what will happen.  When we make these available around performing a task, we unpack the rationale. We want to provide an understanding behind the rote steps, to support adaptation of the process in different situations. We also provide a basis for regenerating missing steps.

Now, we can also be providing examples, e.g. how the model plays out in different contexts. If what the learner is doing now can change under certain circumstances, elaborating how the model guides performing differently in different contexts provides the ability to transfer that understanding.

The design process, then, would be to identify the model guiding the performance (e.g. why we do things in this order), which might be an interplay between structural constraints (we have to remove this screw first because…) and causal ones (this is the chemical that catalyzes the process).  We then need to determine how to represent it.

Once we’ve identified the task, and the associated models, we then need to make these available through the context. And here’s why I’m excited about augmented reality: it’s an obvious way to make the model visible. Quite simply, it can be layered on top of the task itself!   Imagine that the workings behind what you’re doing are available if you want. That you can explore more as you wish, or not, and simply accept the magic ;).

The actual task is the practice, but I’m suggesting that providing a model explaining why it’s done this way is the minimum, and providing examples across a representative sample of other appropriate contexts supports richer performance.  Delivered, to be clear, in the context itself. Still, this is what I think really constitutes microlearning.  So what say you?

9 May 2017

Clarifying Microlearning

Clark @ 8:05 AM

I was honored to learn that a respected professor of educational technology liked my definition of micro-learning, such that he presented it at a recent conference.  He asked if I still agreed with it, and I looked back at what I’d written more recently. What I found was that I’d suggested some alternate interpretations, so I thought it worthwhile to be absolutely clear about it.

So, the definition he cited was:

Microlearning is a small, but complete, learning experience, layered on top of the task learners are engaged in, designed to help learners learn how to perform the task.

And I agree with this, with a caveat. In the article, I’d said that it could also be a small complete learning experience, period. My clarification is that those are unlikely; the definition he cited is the most likely form, and likely the most valuable.

So, I’ve subsequently said (and elaborated on the necessary steps):

What I really think microlearning could and should be is for spaced learning.

Here I’m succumbing to the hype, and trying to put a positive spin on microlearning. Spaced learning is a good thing, it’s just not microlearning. And microlearning really isn’t helping them perform the task in the moment (which is a good thing too), but instead leveraging that moment to also extend their understanding.

No, I like the original definition, where we layer learning on top of a task, leveraging the context and requiring the minimal content to take a task and make it a learning opportunity. That, too, is a good thing. At least I think so. What do you think?

14 March 2017


Clark @ 8:01 AM

There’s been a lot of talk about microlearning of late – definitions, calls for clarity, value propositions, etc – and I have to say that I’m afraid some of it (not what I’ve linked to) is a wee bit facile. Or, at least, conceptually unclear.  And I think that’s a problem. This came up again in a recent conversation, and I had a further thought (which of course I have to blog about ;).  It’s about how to do microdesign, that is, how to design micro learning. And it’s not trivial.

So one of the common views of micro learning is that it’s just in time. That is, if you need to know how to do something, you look it up.  And that’s just fine (as I’ve recently ranted). But it’s not learning. (In short: it’ll help you in the moment, but unless you design it to support learning, it’s performance support instead.)  You can call it Just In Time support, or microsupport, but properly, it’s not micro learning.

The other notion is learning that’s distributed over time. And that’s good.  But this takes a bit more thought. Think about it. If we want to systematically develop somebody over time, it’s not just a steady stream of ‘stuff’.  Ideally, it’s designed to optimally get there, minimizing the time taken on the part of the learner, and yet yielding reliable improvements.  And this is complex.

In principle, it should be a steady development, one that reactivates and extends learners’ capabilities in systematic ways. So, you still need your design steps, but you have to think about granularity, forgetting, reactivation, and development in a more fine-grained way.  What’s the minimum launch?  Can you do aught but make sure there’s an initial intro, concept, example, and a first practice?  Then, how much do we need to reactivate versus how much do we have to expand the capability in each iteration? How much is enough?  As Will Thalheimer says in his spaced learning report, the amount and duration of spacing depends on the complexity of the task and the frequency with which it’s performed.
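To make those trade-offs concrete, here’s a toy spacing rule (my own illustrative formula, not Thalheimer’s findings or any validated algorithm): gaps widen after good recall, shrink after poor recall, and widen more slowly for complex tasks.

```python
def next_interval(prev_days, performance, complexity=1.0):
    """Toy spacing rule. performance is recall quality in [0, 1];
    complexity > 1 slows the expansion, since harder tasks need more
    frequent reactivation. Returns the next gap in days (minimum 1)."""
    if performance >= 0.6:  # good recall: expand the gap
        return max(1.0, prev_days * (1.0 + performance) / complexity)
    return max(1.0, prev_days * 0.5)  # poor recall: tighten the gap
```

Even this crude rule surfaces the real design questions: what counts as “good enough” recall, and how much does task complexity discount the expansion? Those are the parameters you’d have to guess at and tune.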

When do you provide more practice, versus another example, versus a different model?  What’s the appropriate gap in complexity?  We’ll likely have to make our best guesses and tune, but we have to think consciously about it.  Just chunking up an existing course into smaller bits isn’t taking into account the decay of memory over time and the gradual expansion of capability. We have to design an experience!

Microlearning is the right thing to do, given our cognitive architecture. Only so much ‘strengthening’ of the links can happen in any one day, so to develop a full new capability will take time. And that means small bits over time makes sense. But choosing the right bits, the right frequency, the right duration, and the right ramp up in complexity, is non-trivial.  So let’s laud the movement, but not delude ourselves either that performance support or a stream of content is learning. Learning, that is systematically changing the reliable behavior of the most complex thing in the known universe, is inherently complex. We should take it seriously, and we can.

1 February 2017

Other writings

Clark @ 8:04 AM

It occurs to me to mention some of the other places you can find my writings besides here (and how they differ ;).  My blog posts are pretty regular (my aim is 2/week), but tend to have ideas that are embryonic or a bit ‘evangelical’. First, I’ve written four books; you can check them out and get sample chapters at their respective sites:

Engaging Learning: Designing e-Learning Simulation Games

Designing mLearning: Tapping Into the Mobile Revolution for Organizational Performance

The Mobile Academy: mLearning For Higher Education

Revolutionize Learning & Development: Performance and Information Strategy for the Information Age

They’re designed to be the definitive word on the topic, at least at the moment.

I’ve also written or co-written a number of chapters in a variety of books.  The books include The Really Useful eLearning Instruction Manual, Creating a Learning Culture, Michael Allen’s eLearning Annual 2009, and a bunch of academic handbooks (Mobile Learning, Experiential Learning, Wiley Learning Technology ;).  These tend to be longer than an article, with a pretty thorough coverage of whatever topic is on tap.

Then there are articles in a variety of magazines.  These tend to be aggregated thoughts that are longer than a blog post, but not as thorough as a chapter. In particular, they are things I think need to be heard (or read).  So, my writing has shown up in:


Learning Solutions

eLearnMag

The topics vary. (For the eLearnMag ones, you’ll have to search for my name owing to their interface, and they tend to be more like editorials.)

And then there are blog posts for others that are a bit longer than my usual blog post, and close to an article in focus:

The Deeper eLearning series for Learnnovators

A monthly article for Litmos.

These, too, are more like articles in that they’re focused, and deeper than my usual blog post.  For the latter I cover a lot of different topics, so you’re likely to find something relevant there in many different areas.

I’m proud of it all, but for a quick update on a topic, you might be best seeing if there’s a Litmos post on it first.  That’s likely to be relatively short and focused if there is one. And, of course, if it’s a topic you’re interested in advancing in and I can help, do let me know.

5 January 2017

Mobile Lesson

Clark @ 8:04 AM

I’m preparing my keynote for a mobile conference, and it’s caused an interesting reflection.  My mlearning books came out in 2011, and subsequently I’ve written on the revolution.  And I’ve been speaking on both of late, but in some ways the persistent interest in mobile intrigues me.

While my services push the better design of elearning and the bigger picture, mobile isn’t going away. My trip to China to keynote this past year was on mlearning (as was one the year before), and now again I’m talking on the topic.  What does this mean?

As I wrote before, China is much bigger into mobile than we are. It’s likely because we had more ubiquity of internet access and computers, but they’re also a highly mobile populace.  And it makes sense that they’re showing a continuing interest. In fact, they specifically asked for a presentation that was advanced, not my usual introduction.

I’m also going to be presenting more advanced thinking to the upcoming audience, because the entire focus of the event is mlearning and I infer that they’re already up on the basics.  The focus in my books was to get people thinking differently about mobile (because it’s not about courses on a phone), but certainly that was understood in China. I think it’s also understood by most of the developers. I’m less certain about the elearning field (corporate and education), at least not yet.

In many ways, mobile was a catalyst for the revolution.  I think of mlearning as much more than courses, and my models focused on performance support and social more than formal learning. That is really one of the two-fold focuses of the revolution (the “L&D isn’t doing near what it could and should”; to complement the “and what it is doing, it is doing badly” :).  In that way, these devices can be a wedge in the door for a broader focus.

Yet mobile is just a platform for enabling the same types of experiences, the same types of cognitive support, as any other platform, from conversation to artificial intelligence.  It is an important one, however, with the unique properties of doing things whenever and wherever you are, and doing things because of when and where you are.

So I get that mlearning is of interest because of the ubiquity, but the thinking that goes into mobile really goes beyond mobile.  It’s about aligning with us, supporting our needs to communicate and collaborate.  That’s still a need, a useful message, and an opportunity.  Are you mobilizing?


21 September 2016

Collaborative Modelling in AR (and VR)

Clark @ 8:04 AM

A number of years ago, when we were at the height of the hype about Virtual Worlds (computer rendered 3D social worlds, e.g. Second Life), I was thinking about the affordances.  And one that I thought was intriguing was co-creating, in particular collaboratively creating models that were explanatory and predictive.  And in thinking again about Augmented Reality (AR), I realized we had this opportunity again.

Models are hard enough to capture in 2D, particularly if they’re complex.  Having a 3rd dimension can be valuable. Similarly if we’re trying to match how the components are physically structured (think of a model of a refinery, for instance, or a power plant).  Creating it can be challenging, particularly if you’re trying to map out a new understanding.  And, we know that collaboration is more powerful than solo ideation.  So, a real opportunity is to collaborate to create models.

And in the old Virtual Worlds, a number had ways to create 3D objects.  It wasn’t easy, as you had to learn the interface commands to accomplish this task, but the worlds were configurable (e.g. you could build things) and you could build models.  There was also the overall cognitive and processing overhead inherent to the worlds, but these were a given to use the worlds at all.

What I was thinking of, extending my thoughts about AR in general, is that annotating the world is valuable, but how about collaboratively annotating the world?  If we can provide mechanisms (e.g. gestures) for people to not just consume, but create the models ‘in world’ (e.g. while viewing, not offline), we can find some powerful learning opportunities, both formal and informal.  Yes, there are issues in creating and developing abilities with a standard ‘model-building’ language, particularly if it needs to be aligned to the world, but the outcomes could be powerful.

For formal, imagine asking learners to express their understanding. Many years ago, I was working with Kathy Fisher on semantic networks, where she had learners express their understanding of the digestive system and was able to expose misconceptions.  Imagine asking learners to represent their conceptions of causal and other relationships.  They might even collaborate on doing that. They could also just build 3D models not aligned to the world (though that doesn’t necessarily require AR).
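One simple way to expose misconceptions from learner-built models is to compare the causal relations they express against an expert model. A sketch (the digestive-system relations here are invented examples, not from Fisher’s work):

```python
def diff_models(expert_edges, learner_edges):
    """Compare two causal models expressed as sets of (cause, effect)
    edges. Returns the relations the learner is missing and the relations
    absent from the expert model (candidate misconceptions to probe)."""
    expert, learner = set(expert_edges), set(learner_edges)
    return {"missing": expert - learner, "suspect": learner - expert}
```

Edges the learner asserts but the expert model lacks aren’t automatically wrong, of course; they’re flags for a tutor (human or machine) to follow up on.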

And for informal learning, having team or community members working to collaboratively annotate their environment or represent their understanding could solve problems and advance a community’s practices.  Teams could be creating new products, trouble-shooting, or more, with their models.  And communities could be representing their processes and frameworks.

This wouldn’t necessarily have to happen in the real world if the options weren’t aligned to external context, so perhaps VR could be used. At a client event last week, I was given the chance to use a VR headset (Google Cardboard), and immerse myself in the experience. It might not even need to be virtual (collaboration could be just through networked computers), but there is data from research into virtual reality suggesting better learning outcomes.

Richer technology and research into cognition starts giving us powerful new ways to augment our intelligence and co-create richer futures.  While in some sense this is an extension of existing practices, it’s leveraging core affordances to meet conceptually valuable needs.  That’s my model, what’s yours?
