Augmented Reality (AR) is on the upswing, and I think this is a good thing. AR makes sense, and it’s nice to see both solid tool support and real use cases emerging. Here’s the news, but first, a brief overview of why I like AR.
As I’ve noted before, our brains are powerful, but flawed. As with any architecture, each choice comes with tradeoffs, and we’ve traded off detail for pattern-matching. Technology is the opposite: it’s hard to get technology to do pattern matching, but it’s really good at rote. Together, they’re even more powerful. The goal is to augment our intellect with technology as appropriately as possible, creating a symbiosis where the whole is greater than the sum of the parts.
Which is why I like AR: it’s about annotating the world with information, augmenting it to our benefit. It’s contextual, that is, doing things because of when and where we are. AR augments us sensorily, whether auditorily, visually, or kinesthetically (e.g. vibration). Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: “turn left here”). Non-coordinated visual information, information that isn’t overlaid on what you see, is presented as graphics or text (think Yelp: maps and distances to nearby options). Tools already exist to do this, e.g. ARIS. However, arguably the most compelling and interesting form is visuals aligned with the world.
Google Glass was a really interesting experiment, and it’s back. The devices – glasses with a camera and a projector that can present information on the glass – were available, but did little with where you were looking: they offered a generic heads-up display and camera, with little alignment between what was seen and what was presented to the user as additional information. That’s changed. Google Glass has a new Enterprise Edition, and it’s being used to meet real needs and generate real outcomes. In manufacturing situations requiring careful placement, glasses highlight the necessary components and steps on screen, reducing errors and speeding up outcomes.
And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy. One interesting aspect is built-in machine learning, which could make aligning with objects in the world easy! Incompatible platforms and standards impede progress, but with Google and Apple each creating tools for their own platforms, development can be accelerated. (I hope to find out more at the eLearning Guild’s Realities 360 conference.)
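To give a sense of how approachable this is becoming, here’s a minimal sketch of visually aligned annotation using ARKit’s world tracking and plane detection. The ARSCNView, ARWorldTrackingConfiguration, and ARPlaneAnchor APIs are real ARKit/SceneKit types; the “highlight the surface where the next part goes” scenario is my hypothetical, and this only scratches the surface of the toolkit:

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch of visually aligned annotation, assuming an iOS app.
// ARKit detects horizontal surfaces; when one is found, we overlay a
// highlight where it sits in the camera view -- the "aligned visuals"
// described above.
class AlignedAnnotationViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)

        // World tracking keeps virtual content locked to real-world positions.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // Called when ARKit anchors something in the world, e.g. a detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Draw a translucent highlight over the detected surface -- in a real
        // performance-support app this could mark where the next component goes.
        let highlight = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                                 height: CGFloat(planeAnchor.extent.z))
        highlight.firstMaterial?.diffuse.contents = UIColor.yellow.withAlphaComponent(0.4)

        let highlightNode = SCNNode(geometry: highlight)
        highlightNode.simdPosition = planeAnchor.center
        highlightNode.eulerAngles.x = -Float.pi / 2  // SCNPlane is vertical by default
        node.addChildNode(highlightNode)
    }
}
```

A handful of lines gets you virtual content locked to real-world surfaces; the harder work, recognizing which object you’re looking at and deciding what guidance to show, is exactly where the machine learning support could pay off.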
While I think Virtual Reality (VR) has an important role to play for deep learning, contextual support can be a great complement for extending learning (particularly personalization), as well as for performance support. That’s why I’m excited about AR. My vision has been that we’ll have a personal coaching system that knows where and when we are and what our goals are, and can facilitate our learning and success. Tools like these will make it easier than ever.