Robert Scoble has written about Qualcomm’s announcement of a new level of mobile device awareness. He characterizes the phone’s transition from voice (mobile 1.0) to tapping (2.0) to the device knowing what to do (3.0). While I’d characterize it differently, he’s spot on about the importance of this new capability.
I’ve written before about how the missed opportunity is context awareness: not just location, but time as well. What Qualcomm has created is a system that combines location awareness, time awareness, and the ability to build and leverage a rich user profile. Supposedly, according to Robert, it’s also tapped into the accelerometer, the altimeter, and whatever other sensors there are. It’ll be able to know, in pretty fine detail, a lot more about where you are and what you’re doing.
Gimbal is mostly focused on marketing (of course, sigh), but imagine what we could do for learning and performance support!
We can now know who you are and what you’re doing, so:
- a sales team member visiting a client would get specialized information, different from what a field service tech would get at the same location;
- a student of history would get different information at a particular location, such as Boston, than an architecture student would;
- a person learning how to manage meetings more efficiently would get different support than a person working on making better presentations.
I’m sure you can see where this is going. It may well be that we can co-opt the Gimbal platform for learning as well. We’ve had the capability before, but it may now be much easier with an SDK available. Writing rules to take advantage of all the sensors is ultimately going to be a big chore, but if they do the hard yards for their needs, we may be able to ride on their coattails for ours. It may be an instance where marketing does our work for us!
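To make that concrete, here’s a minimal, purely hypothetical sketch of the kind of rule set I have in mind: map a context (who you are, where you are, when it is) to the right learning or performance-support content. It doesn’t use the actual Gimbal SDK; all the names, roles, and rules are invented for illustration.

```python
# Hypothetical sketch: pick support content from a (role, place, time) context.
# None of this touches the real Gimbal SDK; the names are made up.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    role: str          # e.g. "sales", "field_service", "history_student"
    place: str         # e.g. "client_site", "boston_old_state_house"
    time: datetime     # when the request happens

# Each rule is a (predicate, content) pair; the first match wins.
RULES = [
    (lambda c: c.role == "sales" and c.place == "client_site",
     "Pull up the account history and the latest proposal talking points."),
    (lambda c: c.role == "field_service" and c.place == "client_site",
     "Show the install manifest and recent service tickets for this site."),
    (lambda c: c.role == "history_student" and c.place.startswith("boston"),
     "Surface the revolutionary-era events tied to this location."),
    (lambda c: c.role == "architecture_student" and c.place.startswith("boston"),
     "Surface the building's construction details and design lineage."),
]

def content_for(context: Context) -> str:
    """Return the first piece of content whose rule matches the context."""
    for predicate, content in RULES:
        if predicate(context):
            return content
    return "No targeted content; fall back to a general overview."

if __name__ == "__main__":
    ctx = Context(role="sales", place="client_site", time=datetime.now())
    print(content_for(ctx))
```

In a real deployment the predicates would be driven by the platform’s sensor and profile data rather than hand-coded strings, but the shape of the problem stays the same: context in, content out.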
Mobile really is a game changer, and this is just another facet taking it much further along: digital human augmentation that’s making us much more effective in the moment, and ultimately more capable over time. Maybe even wiser. Think about that.
Graham Mills says
You might want to check out Lumiya, which is an Android viewer for SL and OpenSim. It now has a basic 3D view and will shortly be GPS-enabled. That means you can build a virtual environment that mirrors RL but has additional content depending on what sim/region you are logged into, and which changes view as you move around. The content can be interactive and is intrinsically multi-user. See my blog for further info or search Google Play.
Virginia Yonkers says
I don’t know. I think this is kind of scary, especially if misused. I already get frustrated when I’m online or on the phone with a customer service person who relies solely on the computer output. I think there needs to be some personal analysis and human-to-human interaction, especially in learning. I spend most of my time trying to determine which questions I should be asking that will help the student the most. On the other hand, having as much access to information about the student as possible will help in that analysis. This is where the higher level of mobile technology will come in handy.