A number of years ago, at the height of the hype about Virtual Worlds (computer-rendered 3D social worlds, e.g. Second Life), I was thinking about their affordances. One that I found intriguing was co-creation, in particular collaboratively creating models that were explanatory and predictive. And in thinking again about Augmented Reality (AR), I realized we have this opportunity again.
Models are hard enough to capture in 2D, particularly if they’re complex; having a third dimension can be valuable. The same is true if we’re trying to match how the components are physically structured (think of a model of a refinery, for instance, or a power plant). Creating such a model can be challenging, particularly if you’re trying to map out a new understanding. And we know that collaboration is more powerful than solo ideation. So a real opportunity is to collaborate to create models.
And a number of the old Virtual Worlds had ways to create 3D objects. It wasn’t easy, as you had to learn the interface commands to accomplish the task, but the worlds were configurable (e.g. you could build things), so you could build models. There was also the overall cognitive and processing overhead inherent to the worlds, but that was a given if you were to use the worlds at all.
What I was thinking, extending my thoughts about AR in general, was this: annotating the world is valuable, but how about collaboratively annotating the world? If we can provide mechanisms (e.g. gestures) for people not just to consume, but to create the models ‘in world’ (i.e. while viewing, not offline), we can open up some powerful learning opportunities, both formal and informal. Yes, there are issues in creating, and developing abilities with, a standard ‘model-building’ language, particularly if it needs to be aligned to the world, but the outcomes could be powerful.
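To make that concrete, here’s a minimal sketch (in Python) of what collaboratively created ‘in world’ annotations might look like as shared data, assuming a common coordinate frame; the names `Annotation` and `SharedModel` are hypothetical, purely for illustration, not any real AR API:

```python
# A minimal sketch of a shared, world-anchored model. All names here
# (Annotation, SharedModel) are hypothetical illustrations.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Annotation:
    """One contributor's labeled note, anchored to a position in the shared frame."""
    id: str
    author: str
    label: str
    anchor: Tuple[float, float, float]              # x, y, z in the shared frame
    links: List[str] = field(default_factory=list)  # ids of related annotations

class SharedModel:
    """A collaboratively built model: annotations plus the relations between them."""
    def __init__(self) -> None:
        self.annotations: Dict[str, Annotation] = {}

    def add(self, note: Annotation) -> None:
        self.annotations[note.id] = note

    def relate(self, from_id: str, to_id: str) -> None:
        # Record that one annotated component relates to another (e.g. 'feeds into').
        self.annotations[from_id].links.append(to_id)

# Two collaborators annotating, say, a refinery walkthrough:
model = SharedModel()
model.add(Annotation("pump-1", "alex", "feed pump", (2.0, 0.0, 5.5)))
model.add(Annotation("valve-3", "bo", "relief valve", (2.4, 1.1, 5.5)))
model.relate("pump-1", "valve-3")
```

The point isn’t this particular structure; it’s that once annotations are shared data anchored to the world, anyone’s gesture can add to everyone’s model.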
For formal learning, imagine asking learners to express their understanding. Many years ago, I was working with Kathy Fisher on semantic networks; she had learners express their understanding of the digestive system and was able to expose misconceptions. Imagine asking learners to represent their conceptions of causal and other relationships. They might even collaborate on doing that. They could also just build 3D models not aligned to the world (though that doesn’t necessarily require AR).
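As a rough illustration of how such representations can expose misconceptions, a semantic network can be treated as a set of labeled relations and a learner’s network compared against an expert’s. The data below is invented for the example:

```python
# A learner-built semantic network as (concept, relation, concept) triples.
# Both maps are invented examples, not real study data.
learner_map = {
    ("stomach", "digests", "protein"),
    ("stomach", "absorbs", "nutrients"),        # the misconception
}
expert_map = {
    ("stomach", "digests", "protein"),
    ("small intestine", "absorbs", "nutrients"),
}

# Relations the learner asserts that the expert map lacks: candidate misconceptions.
print(learner_map - expert_map)   # {('stomach', 'absorbs', 'nutrients')}
```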
And for informal learning, having team or community members collaboratively annotate their environment or represent their understanding could solve problems and advance a community’s practices. Teams could be creating new products, troubleshooting, and more with their models. And communities could be representing their processes and frameworks.
This wouldn’t necessarily have to happen in the real world if the models weren’t aligned to an external context, so perhaps VR could be used. At a client event last week, I was given the chance to use a VR headset (Google Cardboard) and immerse myself in the experience. It might not even need to be virtual (collaboration could happen just through networked computers), but there is research on virtual reality suggesting better learning outcomes.
Richer technology and research into cognition are starting to give us powerful new ways to augment our intelligence and co-create richer futures. While in some sense this is an extension of existing practices, it leverages core affordances to meet conceptually valuable needs. That’s my model; what’s yours?