Maxwell Planck gave the afternoon keynote for the opening day of DevLearn. He talked about the trajectory of VR, with very interesting reflections on creativity, story, and meaning.
A number of years ago, when we were at the height of the hype about Virtual Worlds (computer rendered 3D social worlds, e.g. Second Life), I was thinking about the affordances. And one that I thought was intriguing was co-creating, in particular collaboratively creating models that were explanatory and predictive. And in thinking again about Augmented Reality (AR), I realized we had this opportunity again.
Models are hard enough to capture in 2D, particularly if they’re complex. Having a 3rd dimension can be valuable. Similarly if we’re trying to match how the components are physically structured (think of a model of a refinery, for instance, or a power plant). Creating it can be challenging, particularly if you’re trying to map out a new understanding. And, we know that collaboration is more powerful than solo ideation. So, a real opportunity is to collaborate to create models.
And in the old Virtual Worlds, a number had ways to create 3D objects. It wasn’t easy, as you had to learn the interface commands to accomplish this task, but the worlds were configurable (e.g. you could build things) and you could build models. There was also the overall cognitive and processing overhead inherent to the worlds, but these were a given to use the worlds at all.
What I was thinking, extending my thoughts about AR in general, was that annotating the world is valuable, but how about collaboratively annotating the world? If we can provide mechanisms (e.g. gestures) for people to not just consume, but create, models ‘in world’ (i.e. while viewing, not offline), we can find some powerful learning opportunities, both formal and informal. Yes, there are issues in creating, and developing abilities with, a standard ‘model-building’ language, particularly if it needs to be aligned to the world, but the outcomes could be powerful.
For formal, imagine asking learners to express their understanding. Many years ago, I was working with Kathy Fisher on semantic networks, where she had learners express their understanding of the digestive system and was able to expose misconceptions. Imagine asking learners to represent their conceptions of causal and other relationships. They might even collaborate on doing that. They could also just build 3D models not aligned to the world (though that doesn’t necessarily require AR).
And for informal learning, having team or community members working to collaboratively annotate their environment or represent their understanding could solve problems and advance a community’s practices. Teams could be creating new products, trouble-shooting, or more, with their models. And communities could be representing their processes and frameworks.
This wouldn’t necessarily have to happen in the real world if the options weren’t aligned to external context, so perhaps VR could be used. At a client event last week, I was given the chance to use a VR headset (Google Cardboard), and immerse myself in the experience. It might not even need to be virtual (instead, collaboration could be just through networked computers), but there is data from research into virtual reality that suggests better learning outcomes.
Richer technology and research into cognition starts giving us powerful new ways to augment our intelligence and co-create richer futures. While in some sense this is an extension of existing practices, it’s leveraging core affordances to meet conceptually valuable needs. That’s my model, what’s yours?
We’re hearing more and more about AR (Augmented Reality), and one of its core elements is layering information on top of the world. But in a conversation the other night, it occurred to me that we could push that information to be even more proactive in facilitating learning. And this comes from the use of models.
The key idea I want to leverage is the use of models to predict or explain what happens in the world. As I have argued, models are useful to guide our performance, and in fact I suggest that they’re the best basis to give people the ability to act, and adapt, in a changing world. So developing the ability to use them is, I would suggest, valuable.
Now, with AR, we can annotate the world with models. We can layer on the conceptual relationships that underpin the things we can observe, showing flow, causation, forces, constraints, and more. We can illustrate tectonic forces, represent socio-economic data, physical properties, and so on. The question is, can we not just illuminate them, but can we ‘exercise’ them?
Imagine that when we presented this information, we asked the learner to make an inference based upon the displayed model. So, for instance, we might ask them, presented with a hypothetical or historical situation to accompany the model, to explain why it would have occurred. Similarly, we could ask them to predict, based upon the model, the outcome of some perturbation.
In short, we’re not only presenting the underlying relationship, but asking them to use it in a particular context. This is what meaningful practice is all about, and we can use the additional information from the AR overlay as scaffolding to support acquiring not just information but the ability to use it.
Now, motivated and effective self-learners wouldn’t need this additional level of support, but there are plausible situations where it would make sense. Another extension would be to ask learners to create a particular change of state (as long as the consequences are controllable). While the addition of information in the world can be helpful, developing that understanding through action could be even more powerful. That’s where my thinking was going, anyway, where does this lead you?
Two recent events converged to spark some new thinking.
First, I had the pleasure of meeting up with Dave Gray, who I’d first met in Abu Dhabi where we both were presenting at a conference. Dave’s an interesting guy; he started XPlane as a firm to deliver meaningful graphics (which was recently bought by Dachis Group), and he’s recently been lead author on the book Gamestorming.
What Gamestorming is, I found out, is a really nice way to frame some common activities that help facilitate creative thinking. Dave’s all over creativity, and took the intersection of game rules and structured activities to facilitate innovative thinking, and came up with a model that guides thinking about social interaction to optimize useful outcomes. The approach incorporates, on a quick survey, a lot of techniques to overcome our cognitive limitations. I really like his approach of providing an underlying rationale for why activities that follow the structure implicitly address our cognitive limitations and are highly effective at getting individuals to contribute to emergent outcomes.
I also happened to have a conversation with a lady who has been creating some local salons, that is, get-togethers that have a structured approach to interaction (I’ve attended another such). Hers was based upon biasing the conversation to the creative side, a very intriguing approach. Not only was she thinking of leveraging this for tech topics, but she was also thinking about leveraging new technologies, e.g., a Second Life salon.
Which got me thinking that there were some relationships between Dave’s Gamestorming approach and the salons. I wouldn’t be surprised to find salons in Dave’s book! More intriguing, however, is the potential of tapping into virtual worlds to remove the geographic constraints on such social interactions.
What was also interesting to me, reflecting on an early experience with the Active Worlds virtual world, was that your attention eventually focused on the chat stream, because that’s where all meaningful interaction really happened. Which is really what #lrnchat is, a chat. One of the nice properties of a chat is that you’re not limited to turn-taking. A problem in the real world is that the more people you add, the less time each gets to contribute in a conversation. In a simultaneous medium like #lrnchat, everyone can contribute as fast as they can, and the only limitations are on the participants’ ability to process the stream and contribute (which are, admittedly, finite). Still, it’s a richer medium for contribution, as I find I can process more chats in the time it would take only one person to talk (of course, the 140 character limit helps too).
The important thing to me is that social media have new capabilities to enable contribution, and to achieve the innovation ends that Dave’s excited about, maximizing outcomes through technology affordances that we are just beginning to appreciate. Can we do better than we’ve done in the past, leveraging new technologies? I think Dave’s model can serve for virtual as well as real events, and we may be able to improve upon the activities with some technology capabilities. To do so, however, means we really have to look at our capabilities in conjunction with new technologies. Yeah, I think we can have some fun with that ;).
The Learning Circuits Blog Big Question of the Month for May is “What will workplace learning technology look like in 2015?” This is a tough question for me, because I tend to see what could be the workplace tech if we really took advantage of the opportunities. Consequently, my predictions tend to be optimistic, as the real world has a way of not moving nearly as fast as one could wish. Still, I actually prefer to think on what could be the possibilities, as it’s more inspiring. Maybe I’ll answer both.
The opportunities on the table are immense. Mobile technologies are taking off, we’re getting real power in technology standards (and still some hiccups), and we’re crossing boundaries between reality and virtual worlds.
Smartphones are on the rise, and new portable devices (e.g. tablets) are expanding the possibilities. It’s highly plausible that we’ll have expanded the performance ecosystem to be location independent, and be providing the 4C’s in ways that allow powerful access, sharing, and collaboration.
Virtual worlds provide a different approach, where instead of augmenting reality, we’re re-contextualized in an artificial but enhanced space where capabilities that don’t exist in the real world are available to us. We can build 3D models, communicate in micro or macro spaces (within molecules or between galaxies), and open up the hidden components of real spaces. Again, we can leverage the 4C’s to go beyond courses to a fuller definition of learning.
This can be facilitated by standards. If HTML 5 coalesces as it should, we can and should be delivering rich interactivity, not just content delivery. Similarly, if we can move beyond ebook standards to capture interactivity, we can make easy marketplaces to deliver capability that is available regardless of connectivity. Virtual world standards are emerging too, and hopefully some convergence will have happened by 2015!
Also, if our backend systems progress as they can (and should), we should be able to move to Web 3.0 where, instead of producers or users, the systems generate content. We can use semantic technologies to do customized delivery of information, pulling together what we know about the learner (e.g. from a competency map or learning path), the content available (from a content model), their tasks (from a job role), and their current context (their location and what’s on their calendar) to serve up just the right information.
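To make the idea concrete, here’s a minimal sketch of that kind of context-sensitive delivery. All the structures and names (`select_content`, the dict fields, the sample catalog) are my own illustration, not any real product’s data model; a real system would use semantic tagging rather than exact string matches.

```python
# Hypothetical sketch: pick content by combining what we know about the
# learner (competency gaps), the content (a tagged catalog), the job role,
# and the current context (topics from the calendar). Names are illustrative.

def select_content(learner, content_catalog, context):
    """Return catalog items matching the learner's gaps, role, and context."""
    # Gaps: competencies on the learning path not yet mastered
    gaps = [c for c in learner["learning_path"] if c not in learner["mastered"]]
    matches = []
    for item in content_catalog:
        relevant = (
            item["competency"] in gaps                        # addresses a gap
            and item["role"] == learner["role"]               # fits the job role
            and item["topic"] in context["calendar_topics"]   # fits current task
        )
        if relevant:
            matches.append(item)
    return matches

learner = {
    "role": "field engineer",
    "learning_path": ["pump maintenance", "safety audit"],
    "mastered": ["safety audit"],
}
content_catalog = [
    {"title": "Pump teardown video", "competency": "pump maintenance",
     "role": "field engineer", "topic": "pump inspection"},
    {"title": "Audit checklist", "competency": "safety audit",
     "role": "field engineer", "topic": "audit"},
]
context = {"location": "plant 7", "calendar_topics": ["pump inspection"]}

print([m["title"] for m in select_content(learner, content_catalog, context)])
# → ['Pump teardown video']
```

The point isn’t the filtering logic, which is trivial here, but the convergence of sources: the same query draws on the learner model, the content model, and live context at once.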
This is all possible. What’s probable? We’ll have seen major progress in mobile tools, whether companies wake up or it’s just individual initiative to accessorize the brain. Virtual worlds will also be more prevalent, though not ubiquitous. Social media systems will be much more integrated into the workflow, and LMS will have become just a cog in the ecosystem, not the ecosystem. The social media will be available whether you’re in-world, in the world, or at your desk.
Semantics, however, are likely to still be nebulous. People are beginning to take advantage of powerful content systems leveraging tagging and flexible delivery, but it’s still embryonic. There’ll be more pockets, but it won’t be a groundswell yet.
I’m probably still being optimistic, but a guy can hope, and of course strive to make it so. This is what I do and where I like to play. I welcome more playmates in this great playground of opportunity.
Karl Kapp and Tony O’Driscoll have launched the age of virtual worlds in organizational learning by providing a thorough overview in their new book Learning in 3D. This is a comprehensive and eloquent book, covering the emerging opportunity in virtual worlds. Replete with conceptual models to provide structure to the discussion as well as pragmatic guidance to how to design and implement learning solutions, this book will help those trying to both get their minds around the possibilities and those who are ready to get their hands dirty.
Their enthusiasm for the opportunities is palpable, and helps bolster the reader through some initial heady material. The book is eloquently written, as you’d expect from two academics, but both also play in the real world, so it’s not too esoteric in language or concept. It’s just that the concepts are complex, and they don’t pander with overly simplistic presentations. They get it, and want you to, too.
Their opening chapters make a solid argument for social learning. They take us through the changes society is going through and the technology transformations of the internet to help us understand why social learning, formal and informal, is a powerful case. They point out the problems with existing formal learning, and identify how these can be addressed in virtual worlds.
What follows is a serious statement of the essential components of a virtual world for organizational learning, a series of models that attempt to capture and categorize learning in a 3D world. They similarly develop a series of useful ‘use cases’ (they term them “archetypes”), and place them in context. Overall, it’s a well thought out characterization of the space.
Coupled with the conceptual overviews is pragmatic support. There are a number of carefully detailed examples that help readers understand the business need and the outcomes as well as the design. There are war stories from a number of pioneers in the space. There is a systematic guide to design that should provide valuable support to readers who are eager to experiment, and the advice on vendors, adoption, and implementation is very practical and valuable.
The book is not without flaws: they set up a ‘straw man’ contrast to virtual world learning. While all too representative of corporate elearning, the contrast of good pedagogy versus bad pedagogy undermines the unique affordances of the virtual world. I note that their principles for virtual world learning design are not unique to virtual worlds, and are essentially no different (except socially) from those in Engaging Learning. And their 7 sensibilities don’t seem quite as conceptually accurate as my own take on virtual world affordances. But these are small concerns in the larger picture of communicating the opportunities.
This is a valuable book for those who want to understand what all the excitement is about in virtual worlds. I’ve been watching the space for a number of years now, and as the technology has matured I have moved from thinking that the overhead was too high to believing that it is a valuable tool in the learning arsenal, and only going to be more so. This book is the guide you need to be ready to capitalize on this opportunity. You can get a 20% discount purchasing it directly from Amazon. Recommended.
eLearning Mag publishes short predictions for the year from a variety of elearning folks, and I thought I’d share and elaborate on what I put in:
I’m hoping this will be the ‘year of the breakthrough’. Several technologies are poised to cross the chasm: social tools, mobile technologies, and virtual worlds. Each has reached critical mass in being realistically deployable, and offers real benefits. And each complements a desired organizational breakthrough, recognizing the broader role of learning not just in execution, but in problem-solving, innovation, and more. I expect to see more inspired uses of technology to break out of the ‘course’ mentality and start facilitating performance more broadly, as organizational structures move learning from ‘nice to have’ to core infrastructure.
While I don’t know that these technologies will actually cross over (I’m notoriously optimistic), they’re pretty much ready to be:
- Social tools I’ve mentioned plenty before, and everyone and their brother is either adding social learning capabilities to their suites, or creating a social learning tool company. And there are lots of open source solutions.
- Mobile has similarly really hit the mainstream, with both reasonable and cheap (read: free) ways to develop mobile apps (cf Richard Clark & my presentation at the last DevLearn), and a wide variety of opportunities. The devices are out there!
- Virtual worlds are a little bit more still in flux (while Linden Lab’s Second Life is going corporate as well, some of the other corporate-focused players are in some upheaval), but the value proposition is clear, and there are still plenty of opportunities. The barriers are coming down rapidly.
Each has available technologies, best principles established and emerging, and real successes. Given that there will be books on each coming this year (including mine ;), I really do think the time is nigh. And, each is a component of a broader approach to learning, one that I’ve been advocating for organizations.
I’m hoping that organizations will start taking a more serious approach to a broad picture of learning. The need in organizations is for learning to not be an isolated add-on, but instead to be part of the infrastructure. We are at a stage now where learning has to go faster than defining, designing, developing, and then delivering can accommodate. The need is for learning to break out of the ‘event’ model, and start becoming more timely, more context-sensitive, and more collaborative. Organizations will need their people to produce new answers on a continual basis.
I’m hoping that organizations will ‘get’ the necessary transition, and take the necessary steps. As Alan Kay said, “the best way to predict the future is to invent it”. I’m hoping we can invent the future, together. We need the breakthrough, so let’s get going!
In prepping for tomorrow night’s #lrnchat, Marcia Conner was asking about the value proposition of virtual worlds. I ripped out a screed and lobbed it, but thought I’d share it here as well:
At core, I believe the essential affordances of the virtual world are 3D/spatial, and social. There are lower-overhead social environments (but… I’ll get back to that). However, many of our more challenging tasks are 3D visualization (e.g. the work of Liz Tancred in medicine, Hollan & Hutchins on steamships). Also, contextualization can be really critical, and immersion may be better. So, for formal learning in particular domains, virtual environments really make a lot of sense. Now you still might not need a social one, so let’s get back to that.
The overhead is high with virtual worlds on the social issue, so ordinarily I’d not put much weight on the value proposition for informal learning, but… two things are swaying me. One is the ability to represent yourself as you’d like to be perceived, not as nature has provided. The other is the ephemeral ‘presence’ and the context. Can we make a more ambient environment to meet virtually, and be fully present (in a sense)? Somehow there’s less intermediation through a virtual world than through a social networking site (with practice).
And one more thing in the informal side: collaborative 3D creation. This is, to me, the real untapped opportunity, but it may require both better interfaces, and more people with more experience.
Now, there’s certainly a business case for learning in virtual worlds *where* there’s an environment that really needs 3D or contextualization, but does it need to be massively social (versus a constrained environment just for education, built in something like ThinkingWorlds)?
And we know there’s a business case for social, but is the overhead of virtual worlds worth it?
However, when we put these two together, adding the power of social learning onto the formal 3D/spatial, and in the social adding the ephemeral ‘presence’ *and* then consider the possibility of 3D spatial collaboration (model building, not just diagram building), and amortize the overhead over a long term organizational uptake, I’m beginning to think that it may just have crossed the threshold.
That is, for formal learning, 3D and contextualization is really underestimated. For social learning, presence and representation may be underrated. And the combination may have emergent benefits.
In short, I think the social learning value of virtual worlds may have broader application than I’ve been giving credit for. Which isn’t even to mention what could come from bridging the social network across virtual, desktop, and even mobile! So, what say you?
Many years ago, I read of some work being done by Valerie Shute and Jeffrey Bonar that I later got a chance to actually play a (very small) role in (and even later got to work with Valerie, definitely world-class talent). They had developed three separate tutoring environments (geometric optics, economics, electrical circuits), yet the tutoring engine was essentially the same across all three, not domain specific. The clever thing they were doing was tutoring on exploration skills, varying one variable at a time, making reasonable increments in values to graph trends, etc.
Subsequent to that, I got involved again in games for learning. What naturally occurred to me was that you could put the same sort of meta-cognitive skill tutoring in a game environment, as you have to digitally create all the elements you’d need to track anyway for game reasons, and it could be a layer on top. While this would work in a single game (and we did put a small version into the Quest game), it would be even better on top of a game engine. I even proposed it as a research project, but the grant reviewers thought that while a good idea, it was too ambitious (ahead of my time and underestimated :).
The renewed interest in so-called 21st century skills, the kind Stephen Downes so eloquently calls an Operating System for the Mind, reawakens the opportunity. These skills are manifested in activity, and require an understanding of the activity to be able to infer approaches and provide feedback. In a well-defined arena like a designed game environment, we can know the goals and possible actions, and start looking for patterns of behavior.
Game engines, with their fixed primitives, make it easier to define and specify the particular goals, and make looking for patterns more generally definable. Thus, in a game, we can see whether the learners’ exploration is systematic, whether their attempts are as informative as possible, and possibly more.
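One such pattern, the one-variable-at-a-time exploration strategy mentioned above, is straightforward to detect once trials are logged against fixed primitives. Here’s a minimal sketch of the idea; the function name, the trial format, and the circuit variables are my own illustration, not anything from Shute and Bonar’s actual tutoring engine.

```python
# Hypothetical sketch: given a log of a learner's experiment trials (each a
# dict of variable settings), check whether each successive trial changes
# exactly one variable, i.e. systematic one-variable-at-a-time exploration.

def varies_one_at_a_time(trials):
    """Return True if every consecutive pair of trials differs in one variable."""
    for prev, curr in zip(trials, trials[1:]):
        changed = [v for v in curr if curr[v] != prev[v]]
        if len(changed) != 1:
            return False
    return True

# A systematic learner sweeps voltage while holding resistance fixed...
systematic = [
    {"voltage": 1, "resistance": 10},
    {"voltage": 2, "resistance": 10},   # only voltage changed
    {"voltage": 3, "resistance": 10},
]
# ...while a scattered learner changes everything at once.
scattered = [
    {"voltage": 1, "resistance": 10},
    {"voltage": 5, "resistance": 47},   # both changed at once
]

print(varies_one_at_a_time(systematic))  # → True
print(varies_one_at_a_time(scattered))   # → False
```

A tutoring layer could use a check like this, along with tests for reasonable increment sizes, to trigger coaching on exploration strategy regardless of the domain being simulated.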
This is also true of virtual worlds, although only when designed with goals (e.g. from a simulation to a scenario, whether tuned into a game or not). The benefit of a virtual world is, again, the primitives are fixed, simplifying the task of defining goals and actions.
Of course, building particular types of interaction (e.g. social), particular types of clues (e.g. audio versus visual) and looking for patterns can provide deeper opportunities. Really, such performance is initially an assessment (one of the facets of what we were doing on the Intellectricity project was building a learner characteristic assessment as a game), and that assessment can trigger intervention as a consequence. For any malleable skill, we have real opportunities.
Given that much of what is necessary are abilities to research, evaluate the quality of sources, design, experiment, create, and more, these environments are a fascinating opportunity. I’m not in a situation to lead such an initiative, but I still think it’s a worthwhile undertaking. Anyone ‘game’?
I recently attended the 3DTLC conference, as I reported before. Chuck Hamilton presented on his (IBM’s) take on the affordances of virtual worlds. Given that I’ve opined before, I asked for more detail on their take, and he was kind enough to forward me their definitions. I like what they’ve done, but it led me to try to refine what I see as some confounding (they actually separate several of their 10 into two separate ones), and try to capture what I think are core, what can be enabled, and what then arises from those capabilities.
I start with what I think are the core affordances of virtual worlds: that there’s a 3D world, that you can visit it, and that it’s digital. From there, I see that you can enable others to be there (social), you can enable action (agency), the world can be kept around (persistent), and it can be made accessible broadly (e.g. through the internet).
If you choose to enable those (and you should, in most cases), you get some emergent properties. Chuck talked about a universal visual language, and you certainly can both tap into, and establish, visual cues. The scale does not have to be realistic, but can indeed be scaled down or up to any size you want, in part or in whole.
You can choose to be anonymous, but if you don’t and choose to have a representation that is active over time, you can establish a reputation.
By being active, you can also enable practice opportunities such as simulations, scenarios, and games. If agency includes not just interaction, but creation, and you have social, you can have co-creation (one of the most exciting opportunities for informal learning). The persistence of your activity creates the opportunity to capture traces for reflection, e.g. ‘after-action review’.
The fact that it’s digital means it can be augmented with external capability: media, applications, and more. Also, you can be at least geography-independent, if not chronologically-independent.
This is a preliminary stab at trying to trace the initial, potential, and consequently emergent affordances; by no means do I think it’s the definitive answer. Feedback solicited!