So, one of the two things I did last week was attend the eLearning Guild’s Realities 360 conference. Ostensibly about Augmented Reality (AR) and Virtual Reality (VR), it ended up being much more about VR. That isn’t a bad thing; it’s probably as much a comment on the state of the industry as anything. However, there were some interesting learnings for me, and I thought I’d share them.
First, I had a very strong visceral exposure to VR. While I’ve played with Cardboard on the iPhone (you can find a collection of resources for Cardboard here), it’s not quite the same as a full VR experience. The conference provided a chance to try out apps for the HTC Vive, Sony PlayStation VR, and the Oculus. On the Vive, I tried a game where you shot arrows at attackers. It was quite fun, but mostly developed some motor skills. On the Oculus, I flew an X-Wing fighter through an asteroid field, escorting a ship and shooting enemy TIE fighters. Again, fun, but mostly about training my motor skills in this environment.
It was another app, on the Vive I think, that gave me a real experience. In it, you’re floating around the International Space Station. It was very cool to see the station and experience the immersion of 3D, but it was also very uncomfortable. Because I was trying to fly around (instead of using the handholds), my viewpoint kept passing through the bulkhead doors, and the positioning gave the visual cues that my chest was going through the metal edge. This was extremely disturbing to me! As I couldn’t control my movement well, I was doing this continually, and I didn’t like it. Partly it was the control, but it was also the total immersion. And that was impressive!
There are empirical results that demonstrate better learning outcomes for VR, and certainly I can see that, particularly for tasks that are inherently 3D. There’s also another key result, as was highlighted in the first keynote: that VR is an ‘empathy’ machine. There have been uses for things like understanding the world as someone with schizophrenia experiences it, and a credit card call center helping employees understand the lives of card users.
In principle, such environments should support near transfer when designed to closely mimic the actual performance environment. (Think: flight or medical simulators.) And the tools are getting better. There’s an app that lets you take photos of a place to view in Cardboard, and game engines (Unity, Unreal, or both) will now let you import AutoCAD models. There was also a special camera that could sense the distances in a space and automatically generate a model of it. The point being that it’s getting easier and easier to generate VR environments.
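To give a concrete sense of how low the barrier has gotten, here’s a minimal sketch of one route (a web-based one with Three.js and WebXR, which is my assumption, not something demonstrated at the conference) for dropping an exported 3D model into a VR scene. The model file name and the lighting values are hypothetical placeholders:

```typescript
// Minimal sketch: loading an exported 3D model (e.g., CAD converted to glTF)
// into a VR-capable scene with Three.js and WebXR. Paths are placeholders.
import * as THREE from 'three';
import { VRButton } from 'three/examples/jsm/webxr/VRButton.js';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  70, window.innerWidth / window.innerHeight, 0.01, 100);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;                                  // turn on WebXR support
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer)); // adds an "Enter VR" button

// Simple ambient lighting so the model is visible.
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

// Load the exported model; the path is a hypothetical placeholder.
new GLTFLoader().load('models/space-station.glb', (gltf) => {
  scene.add(gltf.scene);
});

// setAnimationLoop (rather than requestAnimationFrame) is needed for WebXR sessions.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

The point isn’t this particular stack; it’s that what once required a custom engine team is now roughly a page of code plus an exported model.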
That kind of tooling, I think, is what’s holding AR back. You can fairly easily use AR for marker- or location-based information, but actually annotating the world visually is still challenging. I still think AR is of more interest (maybe just to me), because I see it eventually creating the possibility of seeing the causes and factors behind the world, allowing us to understand it better. I could argue that VR is just extending sims from flat screen to surround, but then I think about the space station, and… I’m still pondering that. Is it revolutionary or just evolutionary?
One session talked about trying to help folks figure out when VR and AR make sense, and this intrigued me. It reminded me that I had tried to characterize the affordances of virtual worlds, and I reckon it’s time to take a stab at doing this for VR and AR. I believed then that I could predict where virtual worlds would continue to find value, and I think the results have borne that out. So the intent is to try to get on top of when VR and AR make sense. Stay tuned!