Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

13 September 2017

Why AR

Clark @ 8:07 AM

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up.  I’m sure there’s lots more that could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand drawn), so understand I’m talking concept here, not execution!


Here, I’m starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses).  The system could tell me it’s a tree, not a bush, technically (thanks to my flora-wise better half).  It could also illustrate how large it is.  Finally, the view could indicate that what I’m viewing is a magnolia (which I wouldn’t have known), and show me off to the right the flower bud stage.

The point is that we can get information around the particular thing we’re viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses.  Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn.  And, perhaps, with some additional incidental information on the periphery of my interests, for serendipity.
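That sort of interest-driven filtering, with a little serendipity mixed in, is easy to sketch. Here’s a minimal illustration in Python; the annotation format and all the names are invented for the example, not any real AR system:

```python
# Hypothetical sketch: filter AR annotations by the viewer's interest
# profile, letting the occasional off-interest item through for serendipity.
import random

def filter_annotations(annotations, interests, serendipity=0.1):
    """Keep annotations matching the viewer's interests, plus the
    occasional peripheral item for serendipitous discovery."""
    selected = []
    for note in annotations:
        if note["topic"] in interests:
            selected.append(note)
        elif random.random() < serendipity:
            selected.append(note)  # off-interest item, shown by chance
    return selected

notes = [
    {"topic": "botany", "text": "Magnolia; technically a tree, not a bush"},
    {"topic": "medicine", "text": "Bark used in traditional remedies"},
    {"topic": "ecology", "text": "Visited by bees and hummingbirds"},
]
print(filter_annotations(notes, {"botany"}))
```

With `serendipity=0` only the matching annotations survive; raising it widens the periphery.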

Going wider, here I’m looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and the infamous Mt. Diablo is off to the left of the picture. It could do more, pointing out that the green ridges are grapes, or providing the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would be identified when it sprang into view.  As we moved around, we’d point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges.  We could also identify the river flowing past to the north.  And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever’s relevant could be filtered in or out.

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I’ve pointed to the clouds (and indicated the likelihood of rain). Similarly, I’ve identified the rock and the mechanism of shaping. (These are all made up, they could be wrong; Mt Faux definitely is!)  We might even be able to touch a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, we can capitalize on them to develop my understanding. And this is as an adult. Think about doing this for kids, layering on information in their Zone of Proximal Development and interests!  I know VR’s cool, and has real learning potential, but there you have to create the context. Here we’re taking advantage of it. That may be harder, but it’s going to have some real upsides when it can be done ubiquitously.

1 August 2017

Realities 360 Reflections

Clark @ 8:08 AM

So, one of the two things I did last week was attend the eLearning Guild‘s Realities 360 conference.  Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. Which isn’t a bad thing, it’s probably as much a comment on the state of the industry as anything.  However, there were some interesting learnings for me, and I thought I’d share them.

First, I had a very strong visceral exposure to VR. While I’ve played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it’s not quite the same as a full VR experience.  The conference provided a chance to try out apps for the HTC Vive, Sony PlayStation VR, and the Oculus.  On the Vive, I tried a game where you shot arrows at attackers.  It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorted a ship, and shot enemy TIE fighters.  Again, fun, but mostly about training my motor skills in this environment.

It was one on the Vive, I think, that gave me a real experience.  In it, you’re floating around the International Space Station. And it was very cool to see the station and experience the immersion of 3D, but it was very uncomfortable.  Because I was trying to fly around (instead of using handholds), my viewpoint would fly through the bulkhead doors. The positioning meant I got the visual cues that my chest was passing through the metal edge.  This was extremely disturbing to me!  As I couldn’t control it well, I was doing this continually, and I didn’t like it. Partly it was the control, but it was also the total immersion. And that was impressive!

There are empirical results that demonstrate better learning outcomes for VR, and certainly I can see that, particularly for tasks that are inherently 3D. There’s also another key result, as was highlighted in the first keynote: that VR is an ’empathy’ machine. There have been uses for things like understanding the world as someone with schizophrenia experiences it, and a credit card call center helping employees understand the lives of card users.

In principle, such environs should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medicine simulators.)  And the tools are getting better. There’s an app that allows you to take photos of a place to put into Cardboard, and game engines (Unity and Unreal) will now let you import AutoCAD models.  There was also a special camera that could sense the distances in a space and automatically generate a model of it.  The point being that it’s getting easier and easier to generate VR environments.

That, I think, is what’s holding AR back.  You can fairly easily use it for marker- or location-based information, but actually annotating the world visually is still challenging.  I still think AR is of more interest (maybe just to me), because I see it eventually creating the possibility of seeing the causes and factors behind the world, allowing us to understand it better.  I could argue that VR is just extending sims from flat screen to surround, but then I think about the space station, and…I’m still pondering. Is it revolutionary or just evolutionary?

One session talked about trying to help folks figure out when VR and AR made sense, and this intrigued me. It reminded me that I had tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing this for VR and AR.  I believed then that I was able to predict when virtual worlds would continue to find value, and I think results have borne that out.  So, the intent is to try to get on top of when VR and AR make sense.  Stay tuned!

22 November 2016

Thoughts on story, games, and VR

Clark @ 8:09 AM

As luck would have it, I found out about an event on Storytelling Across Media being run in the city, and attended a couple of the panels: half of one on interactive design and Telltale Games, one on story and games, and one on story and VR.  There were interesting quotes from each about story, games, and VR that prompted reflection, and I thought I’d share my thoughts with you.

Story and Games

The first quote that struck home was “nonlinear storytelling strikes a balance between narrative and choice”. This is the challenge that I and I think all game designers struggle with. So, I subsequently asked “How do you integrate storytelling with experience design?”  The panelists acknowledged that this was the ongoing challenge. Another comment was that “stories are created in your imagination”.  That’s key, I think, to create experiences that the player will end up writing as a story they can tell.

I found myself thinking about story machines versus experience engines.  It appears to me, à la Sid Meier’s “a good game is a series of interesting decisions”, that it’s all about the decisions you make.  It’s easier to tell a good story when you put a game ‘on rails’; it’s harder when you want to have an open world and still ramp up the tension across the board.  Having rules and timers gives you the opportunity.  For serious games, however, as opposed to commercial ones, I reckon it’s more OK for the story to be somewhat linear.

Another interesting comment was about how things are going transmedia.  An issue that emerged was the business of transmedia: how you might start with a comic to build interest and revenue to fund adding a game, or a movie.  Telling stories across media is an interesting challenge, and could have real opportunity for learning. I have been a fan of Andrea Phillips’ Transmedia Storytelling and Koreen Pagano’s Immersive Learning, which I think give good clues about how this might go.  I’m also thinking about the movie The Game (Michael Douglas & Sean Penn), and how it’s a great example of an alternate reality game. I’d love to do something like that, but serious. We did a demo once about sales that captured some of the opportunity, but…

Also, I’ve looked at many instances of experience design: movies, theatre, amusement parks, games, etc.  And I’ve advocated that those interested in making experiences engaging, particularly learning experiences, should similarly explore these. It’s hard work, you know ;).  However, one of the panelists commented on ‘circus design’. That’s something I had never thought to explore, so it’s now on my ‘todo’ list!

There were also several mentions of a theatre experience in New York called Sleep No More.  It involves two intersecting stories: Macbeth and a lady looking for someone. There’s no dialog, and it plays out across several venues. The interesting thing is that you, as an audience member, choose where to go, who to follow, and what to watch.  Now I need to find a way to experience this! (Wish I’d heard about it before my keynote there in June.)


Story and VR

The other theme was VR, and there were some very interesting comments made. It was repeatedly made clear by the practitioners that this was a field still very much in development. The tools and technologies had become good and cheap enough to allow tinkering and exploration, but the business models and viable experiences were still being explored.

One quote that was interesting was a response to the issue of what the ‘frame’ is.  In computer games, the frame is the screen. But in VR there’s no ‘screen’; you’re surrounded.  A response to this was “the player is the ‘frame’ in VR”.  That’s an interesting perspective.  I might reframe it as “the player’s attention is the ‘frame’ in the game”, and manipulating that may be the key.  To ponder.

Another interesting comment was “proximity breeds empathy”.  I was reminded of the phrase “familiarity breeds contempt”, but I can see that an experiential approach may help generate sympathy and comprehension.  Can you actually share someone else’s experience?  Certainly, immersion has yielded concrete learning improvements, and successful behavioral interventions.

Which brings up a response, to the question of where the future of VR lies, that seemed to be shared by the other panelists: shared VR is the future.  Clearly, social has big benefits for learning, and can be the basis of strong emotions (sometimes negative!).

There are clearly times when VR has unique and valuable advantages for learning, though I continue to think that AR may provide the greater overall opportunity, when it’s done right.  It might be like the difference between courses and mentoring.  That is, VR to make a step change, and AR for continual development.  Where do ARGs fit in?  Perhaps more for developing the ability to deal with the unexpected?

One of the panelists mentioned Magic Leap, and I was reminded that that type of experience will be where we can really get opportunities for transformative experiences. I think that’s where Google Glass was going, and they’re right to hold off and get it right, but when we can really start annotating the world, combining it with ARGs, there will be real potential.  We can start designing now, but it’ll definitely be some time before tools and technologies hit the ‘experimentation’ phase VR has reached.

Lots of fodder for thinking!

16 November 2016

Maxwell Planck #DevLearn Keynote Mindmap

Clark @ 5:01 PM

Maxwell Planck gave the afternoon keynote for the opening day of DevLearn. He talked about the trajectory of VR, with very interesting reflections on creativity, story, and meaning.

[Maxwell Planck keynote mindmap]

21 September 2016

Collaborative Modelling in AR (and VR)

Clark @ 8:04 AM

A number of years ago, when we were at the height of the hype about Virtual Worlds (computer rendered 3D social worlds, e.g. Second Life), I was thinking about the affordances.  And one that I thought was intriguing was co-creating, in particular collaboratively creating models that were explanatory and predictive.  And in thinking again about Augmented Reality (AR), I realized we had this opportunity again.

Models are hard enough to capture in 2D, particularly if they’re complex.  Having a third dimension can be valuable, particularly if we’re trying to match how the components are physically structured (think of a model of a refinery, for instance, or a power plant).  Creating such a model can be challenging, particularly if you’re trying to map out a new understanding.  And we know that collaboration is more powerful than solo ideation.  So a real opportunity is to collaborate to create models.

And a number of the old virtual worlds had ways to create 3D objects.  It wasn’t easy, as you had to learn the interface commands to accomplish the task, but the worlds were configurable (e.g. you could build things) and you could build models.  There was also the overall cognitive and processing overhead inherent to the worlds, but that was a given in using the worlds at all.

What I was thinking, extending my thoughts about AR in general, is that annotating the world is valuable, but how about collaboratively annotating the world?  If we can provide mechanisms (e.g. gestures) for people not just to consume, but to create the models ‘in world’ (i.e. while viewing, not offline), we can find some powerful learning opportunities, both formal and informal.  Yes, there are issues in creating, and developing abilities with, a standard ‘model-building’ language, particularly if it needs to be aligned to the world, but the outcomes could be powerful.
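To make the idea concrete, here’s a minimal sketch of a shared store of world-anchored model elements that several collaborators add nodes and relationships to. Everything here (the field names, the anchoring scheme) is invented for illustration, not any real AR toolkit’s API:

```python
# Hypothetical sketch: a shared model whose nodes are anchored to
# world positions, built up collaboratively by multiple authors.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    position: tuple  # (lat, lon, alt) world anchor -- invented scheme
    author: str

@dataclass
class SharedModel:
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)  # (from_idx, to_idx, relation)

    def add_node(self, label, position, author):
        """Add an anchored node; return its index for later linking."""
        self.nodes.append(Node(label, position, author))
        return len(self.nodes) - 1

    def link(self, a, b, relation):
        """Record a labeled relationship between two nodes."""
        self.links.append((a, b, relation))

# Two collaborators annotating a (fictional) plant they're both viewing:
model = SharedModel()
pump = model.add_node("pump", (37.9, -122.1, 10.0), "alice")
valve = model.add_node("valve", (37.9, -122.1, 12.5), "bob")
model.link(pump, valve, "feeds")
```

The point of the sketch is the shape of the data, not the rendering: once the model is shared and anchored, each viewer’s device can draw the nodes and links over the world from their own vantage point.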

For formal, imagine asking learners to express their understanding. Many years ago, I was working with Kathy Fisher on semantic networks, where she had learners express their understanding of the digestive system and was able to expose misconceptions.  Imagine asking learners to represent their conceptions of causal and other relationships.  They might even collaborate on doing that. They could also just build 3D models not aligned to the world (though that doesn’t necessarily require AR).

And for informal learning, having team or community members working to collaboratively annotate their environment or represent their understanding could solve problems and advance a community’s practices.  Teams could be creating new products, trouble-shooting, or more, with their models.  And communities could be representing their processes and frameworks.

This wouldn’t necessarily have to happen in the real world if the models weren’t aligned to external context, so perhaps VR could be used. At a client event last week, I was given the chance to use a VR headset (Google Cardboard) and immerse myself in the experience. It might not even need to be virtual (collaboration could happen just through networked computers), but there is research data on virtual reality that suggests better learning outcomes.

Richer technology and research into cognition starts giving us powerful new ways to augment our intelligence and co-create richer futures.  While in some sense this is an extension of existing practices, it’s leveraging core affordances to meet conceptually valuable needs.  That’s my model, what’s yours?

13 September 2016

Augmenting AR for Learning

Clark @ 8:01 AM

We’re hearing more and more about AR (Augmented Reality), and one of its core elements is layering information on top of the world.  But in a conversation the other night, it occurred to me that we could push that information to be even more proactive in facilitating learning. And this comes from the use of models.

The key idea I want to leverage is the use of models to predict or explain what happens in the world. As I have argued, models are useful to guide our performance, and in fact I suggest they’re the best basis for giving people the ability to act, and adapt, in a changing world.  So developing the ability to use them is, I would suggest, valuable.

Now, with AR, we can annotate the world with models.  We can layer on the conceptual relationships that underpin the things we can observe, showing flow, causation, forces, constraints, and more.  We can illustrate tectonic forces, represent socio-economic data, physical properties, and so on.  The question is, can we not just illuminate them, but ‘exercise’ them?

Imagine that when we presented this information, we asked the learner to make an inference based upon the displayed model.  So, for instance, we might ask them, presented with a hypothetical or historical situation to accompany the model, to explain why it would have occurred. Similarly, we could ask them to predict, based upon the model, the outcome of some perturbation.

In short, we’re not only presenting the underlying relationship, but asking them to use it in a particular context.  This is what meaningful practice is all about, and we can use the additional information from the AR overlay as scaffolding to support acquiring not just information but the ability to use it.
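A toy sketch of that scaffolding idea: the overlay carries a causal model (entirely made up here, like the Mt Faux labels above), and we ask the learner to predict an effect, checking the answer against the model:

```python
# Hypothetical sketch: quiz a learner against a causal model shown
# in the AR overlay. The model contents are invented for illustration.
def predict(model, cause):
    """Follow the causal link to the effect the model expects."""
    return model.get(cause)

# A made-up cause -> effect model for a roadside rock formation:
causal_model = {
    "uplift": "folded strata",
    "freeze-thaw": "fractured rock",
}

def quiz(model, cause, learner_answer):
    """True if the learner's prediction matches the model's."""
    return learner_answer == predict(model, cause)

print(quiz(causal_model, "freeze-thaw", "fractured rock"))  # True
```

A mismatch is exactly the teachable moment: the overlay can then animate the model to show why the predicted outcome follows.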

Now, motivated and effective self-learners wouldn’t need this additional level of support, but there are plausible situations where it would make sense.  Another extension would be to ask learners to create a particular change of state (as long as the consequences are controllable).  While the addition of information in the world can be helpful, developing that understanding through action could be even more powerful.  That’s where my thinking was going, anyway, where does this lead you?

7 September 2010

Brainstorming, Cognition, #lrnchat, and Innovative Thinking

Clark @ 6:05 AM

Two recent events converged to spark some new thinking.

First, I had the pleasure of meeting up with Dave Gray, whom I’d first met in Abu Dhabi where we both were presenting at a conference. Dave’s an interesting guy; he started XPlane as a firm to deliver meaningful graphics (which was recently bought by Dachis Group), and he’s recently been lead author on the book Gamestorming.

What Gamestorming is, I found out, is a really nice way to frame some common activities that help facilitate creative thinking.  Dave’s all over creativity, and took the intersection of game rules and structured activities to facilitate innovative thinking, coming up with a model that guides thinking about social interaction to optimize useful outcomes.  The approach incorporates, on a quick survey, a lot of techniques to overcome our cognitive limitations. I really like that his approach provides an underlying rationale for why activities that follow the structure implicitly address our cognitive limitations and are highly effective at getting individuals to contribute to emergent outcomes.

I also happened to have a conversation with a lady who has been creating some local salons: get-togethers with a structured approach to interaction (I’ve attended another such event).  Hers was based upon biasing the conversation to the creative side, a very intriguing approach. Not only was she thinking of leveraging this for tech topics, but she was also thinking about leveraging new technologies, e.g. a Second Life salon.

Which got me thinking that there were some relationships between Dave’s Gamestorming approach and the salons. I wouldn’t be surprised to find salons in Dave’s book!  More intriguing, however, are the potentials of tapping into virtual worlds to remove the geographic constraints on such social interactions.

What was also interesting to me, reflecting on an early experience with the Active Worlds virtual world, was that your attention eventually focused on the chat stream, because that’s where all the meaningful interaction really happened.  Which is really what #lrnchat is: a chat.   One of the nice properties of a chat is that you’re not limited to turn-taking.  A problem in the real world is that the more people you add to a conversation, the less time each gets to contribute. In a simultaneous medium like #lrnchat, everyone can contribute as fast as they can, and the only limitations are on the participants’ ability to process the stream and contribute (which is, admittedly, finite).  Still, it’s a richer medium for contribution, as I find I can process many chats in the time it would take one person to talk (of course, the 140-character limit helps too).

The important thing to me is that social media have new capabilities to enable contribution, and can achieve the innovation ends Dave’s excited about in ways that maximize outcomes, based upon new technology affordances we are just beginning to appreciate.  Can we do better than we’ve done in the past, leveraging new technologies?  I think Dave’s model can serve for virtual as well as real events, and we may be able to improve upon the activities with some technology capabilities.  To do so, however, means we really have to look at our capabilities in conjunction with new technologies.  Yeah, I think we can have some fun with that ;).

5 May 2010

May Big Q: Workplace Learning Technology 2015

Clark @ 10:23 AM

The Learning Circuits Blog Big Question of the Month for May is “What will workplace learning technology look like in 2015?”  This is a tough question for me, because I tend to see what could be the workplace tech if we really took advantage of the opportunities. Consequently, my predictions tend to be optimistic, as the real world has a way of not moving near as fast as one could wish.  Still, I actually prefer to think on what could be the possibilities, as it’s more inspiring.  Maybe I’ll answer both.

The opportunities on the table are immense.  Mobile technologies are taking off, we’re getting real power in technology standards (and still some hiccups), and we’re crossing boundaries between reality and virtual worlds.

Smartphones are on the rise, and new portable devices (e.g. tablets) are expanding the possibilities.  It’s highly plausible that we’ll have expanded the performance ecosystem to be location independent, and be providing the 4C’s in ways that allow powerful access, sharing, and collaboration.

Virtual worlds provide a different approach, where instead of augmenting reality, we’re re-contextualized in an artificial but enhanced space where capabilities that don’t exist in the real world are available to us.  We can build 3D models, communicate in micro or macro spaces (within molecules or between galaxies), and open up the hidden components of real spaces.  Again, we can leverage the 4C’s to go beyond courses to a fuller definition of learning.

This can be facilitated by standards.  If HTML 5 coalesces as it should, we can and should be delivering rich interactivity, not just content delivery.  Similarly, if we can move beyond ebook standards to capture interactivity, we can make easy marketplaces to deliver capability that is available regardless of connectivity. Virtual world standards are emerging too, and hopefully some convergence will have happened by 2015!

Also, if our backend systems progress as they can (and should), we should be able to move to Web 3.0 where, instead of producers or users generating content, the systems do.  We can use semantic technologies to do customized delivery of information, pulling together what we know about the learner (e.g. from a competency map or learning path), about the content available (from a content model), their tasks (from a job role), and their current context (their location and what’s on their calendar) to serve up just the right information.
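That kind of semantic matchmaking can be sketched as a simple scoring function over what we know about the learner, the content, and the context. All the field names and weights here are invented for illustration, not any real standard:

```python
# Hypothetical sketch: score content items against a learner profile
# and current context, then serve the best match.
def relevance(item, learner, context):
    """Higher score = better fit for this learner, here and now."""
    score = 0
    if item["competency"] in learner["gaps"]:
        score += 2                      # addresses a known competency gap
    if learner["role"] in item["roles"]:
        score += 1                      # matches the job role
    if item["location"] == context["location"]:
        score += 1                      # usable in the current location
    return score

def serve(items, learner, context):
    """Pick the single most relevant item from the content model."""
    return max(items, key=lambda i: relevance(i, learner, context))

items = [
    {"id": "intro", "competency": "basics", "roles": ["all"],
     "location": "any"},
    {"id": "pump-check", "competency": "maintenance", "roles": ["tech"],
     "location": "plant-3"},
]
learner = {"gaps": {"maintenance"}, "role": "tech"}
context = {"location": "plant-3"}
print(serve(items, learner, context)["id"])  # pump-check
```

A real system would draw these facts from the competency map, content model, HR system, and calendar rather than hard-coded dictionaries, but the pull-it-all-together shape is the same.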

This is all possible.  What’s probable?  We’ll have seen major progress in mobile tools, whether companies wake up or it’s just individual initiative to accessorize the brain.  Virtual worlds will also be more prevalent, though not ubiquitous.  Social media systems will be much more integrated into the workflow, and LMS will have become just a cog in the ecosystem, not the ecosystem. The social media will be available whether you’re in-world, in the world, or at your desk.

Semantics, however, are likely to still be nebulous. People are beginning to take advantage of powerful content systems leveraging tagging and flexible delivery, but it’s still embryonic.  There’ll be more pockets, but it won’t be a groundswell yet.

I’m probably still being optimistic, but a guy can hope, and of course strive to make it so.  This is what I do and where I like to play. I welcome more playmates in this great playground of opportunity.

13 January 2010

Kapp & Driscoll nail Learning in 3D

Clark @ 5:02 AM

Karl Kapp and Tony O’Driscoll have launched the age of virtual worlds in organizational learning by providing a thorough overview in their new book Learning in 3D. This is a comprehensive and eloquent book, covering the emerging opportunity in virtual worlds.  Replete with conceptual models that provide structure to the discussion, as well as pragmatic guidance on how to design and implement learning solutions, this book will help both those trying to get their minds around the possibilities and those who are ready to get their hands dirty.

Their enthusiasm for the opportunities is palpable, and helps bolster the reader through some initially heady material. The book is eloquently written, as you’d expect from two academics, but both also play in the real world, so it’s not too esoteric in language or concept.  It’s just that the concepts are complex, and they don’t pander with overly simplistic presentations. They get it, and want you to, too.

Their opening chapters make a solid argument for social learning.  They take us through the changes society is going through and the technology transformations of the internet to help us understand why social learning, formal and informal, is a powerful case.  They point out the problems with existing formal learning, and identify how these can be addressed in virtual worlds.

What follows is a serious statement of the essential components of a virtual world for organizational learning, a series of models that attempt to capture and categorize learning in a 3D world.  They similarly develop a series of useful ‘use cases’ (they term them “archetypes”), and place them in context.  Overall, it’s a well thought out characterization of the space.

Coupled with the conceptual overviews is pragmatic support.  There are a number of carefully detailed examples that help readers understand the business need and the outcomes as well as the design.  There are war stories from a number of pioneers in the space.  There is a systematic guide to design that should provide valuable support to readers who are eager to experiment, and the advice on vendors, adoption, and implementation is very practical and valuable.

The book is not without flaws: they set up a ‘straw man’ contrast to virtual world learning.  While all too representative of corporate elearning, the contrast of good pedagogy versus bad pedagogy undermines the unique affordances of the virtual world.  I note that their principles for virtual world learning design are not unique to virtual worlds, and are essentially no different (except socially) from those in Engaging Learning.  And their seven sensibilities don’t seem quite as conceptually accurate as my own take on virtual world affordances.  But these are small concerns in the larger picture of communicating the opportunities.

This is a valuable book for those who want to understand what all the excitement is about in virtual worlds.  I’ve been watching the space for a number of years now, and as the technology has matured I’ve moved from thinking that the overhead was too high to believing that it is a valuable tool in the learning arsenal, and only going to be more so. This book is the guide you need to be ready to capitalize on this opportunity.  You can get a 20% discount purchasing it directly from Amazon.  Recommended.

5 January 2010

Predictions for 2010

Clark @ 7:04 AM

eLearning Mag publishes short predictions for the year from a variety of elearning folks, and I thought I’d share and elaborate on what I put in:

I’m hoping this will be the ‘year of the breakthrough’.  Several technologies are poised to cross the chasm: social tools, mobile technologies, and virtual worlds.  Each has reached critical mass in being realistically deployable, and offers real benefits.  And each complements a desired organizational breakthrough, recognizing the broader role of learning not just in execution, but in problem-solving, innovation, and more.  I expect to see more inspired uses of technology to break out of the ‘course’ mentality and start facilitating performance more broadly, as organizational structures move learning from ‘nice to have’ to core infrastructure.

While I don’t know that these technologies will actually cross over (I’m notoriously optimistic), they’re pretty much ready to be:

  • Social I’ve mentioned plenty before, and everyone and their brother is either adding social learning capabilities to their suites, or creating a social learning tool company. And there are lots of open source solutions.
  • Mobile has similarly really hit the mainstream, with both reasonable and cheap (read: free) ways to develop mobile apps (cf Richard Clark & my presentation at the last DevLearn), and a wide variety of opportunities. The devices are out there!
  • Virtual worlds are a little bit more still in flux (while Linden Lab’s Second Life is going corporate as well, some of the other corporate-focused players are in some upheaval), but the value proposition is clear, and there are still plenty of opportunities.  The barriers are coming down rapidly.

Each has available technologies, best principles established and emerging, and real successes.  Given that there will be books on each coming this year (including mine ;), I really do think the time is nigh.  And, each is a component of a broader approach to learning, one that I’ve been advocating for organizations.

I’m hoping that organizations will start taking a more serious approach to a broad picture of learning.  The need in organizations is for learning not to be an isolated add-on, but instead to be part of the infrastructure.  We are at a stage now where learning has to go faster than the cycle of analyzing, defining, designing, developing, and then delivering can accommodate.  The need is for learning to break out of the ‘event’ model, and start becoming more timely, more context-sensitive, and more collaborative.  Organizations will need their people to produce new answers on a continual basis.

I’m hoping that organizations will ‘get’ the necessary transition, and take the necessary steps.  As Alan Kay said, “the best way to predict the future is to invent it”.  I’m hoping we can invent the future, together.  We need the breakthrough, so let’s get going!
