I was watching a blab (a video chat tool) about the upcoming FocusOn Learning, a new event from the eLearning Guild. This conference combines their previous mLearnCon and Performance Support Symposium with the addition of video. The previous events have been great, and I’ll of course be there (offering a workshop on cognition for mobile, a mobile learning 101 session, and one on the topic of this post). Listening to folks talk about the conference led me to ponder the connection, and something struck me.
I find it kind of misleading that it’s FocusOn Learning, given that performance support, mobile, and even video are typically more about acting in the moment than about developing over time. Mobile device use tends to be more about quick access than extended experience. Performance support is more about augmenting our cognitive capabilities. Video (as opposed to animation, images, or graphics, and similar to photos) is about showing how things happen in situ (I note that this is my distinction, and they may well include animation in their definition of video, so caveat emptor). The unifying element, to me, is context.
So, mobile is a platform. It’s a computational medium, and as such is the same sort of computational augment that a desktop is. Except that it can be with you. Moreover, it can have sensors, so it’s not just providing computational capabilities where you are, but capabilities tuned to when and where you are.
Performance support is about providing a cognitive augment. It can be any medium – paper, audio, digital – but it’s about providing support for the gaps in our mental capabilities. Our cognitive architecture is powerful, but it has limitations, and we can provide support to minimize those problems. It’s about support in the moment, that is, in context.
And video, like photos, inherently captures context. Unlike an animation that represents conceptual distinctions separated from the real world along one or more dimensions, a video accurately captures what the camera sees happening. It’s again about context.
And the interesting thing to me is that we can support performance in the moment, whether with a lookup table or a how-to video, without learning necessarily happening. And that’s OK! It’s also possible to use context to support learning, and in fact it takes less material to augment a real context than to create the artificial context that so much of learning requires.
What excited me was the discussion about AR and AI. And these, to me, are also about context. Augmented Reality layers information on top of your current context. And the way you start doing contextually relevant content delivery is with rules tied to content descriptors (hence content systems), and such rules are really part of an intelligently adaptive system.
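To make that concrete, here’s a minimal sketch (my own illustration, not any particular system’s implementation) of what “rules tied to content descriptors” could look like: content items carry hypothetical descriptors such as task and location, the current context supplies matching signals, and a simple rule scores and ranks what to surface.

```python
# Minimal sketch of rule-driven contextual content delivery.
# The descriptors (task, location, device) and content items are
# hypothetical; a real system would pull these from a content
# repository and from device sensors.

from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    descriptors: dict = field(default_factory=dict)  # e.g. {"task": "repair", "location": "field"}

@dataclass
class Context:
    signals: dict = field(default_factory=dict)      # e.g. {"task": "repair", "device": "phone"}

def matches(item: ContentItem, context: Context) -> int:
    """Score an item by how many of its descriptors match the current context."""
    return sum(
        1 for key, value in item.descriptors.items()
        if context.signals.get(key) == value
    )

def recommend(library: list[ContentItem], context: Context) -> list[ContentItem]:
    """Return content ordered by contextual relevance, most relevant first."""
    scored = [(matches(item, context), item) for item in library]
    return [item for score, item in sorted(scored, key=lambda pair: -pair[0]) if score > 0]

# Example: a phone in the field during a repair task surfaces the how-to video first.
library = [
    ContentItem("Pump repair how-to video", {"task": "repair", "location": "field"}),
    ContentItem("Pump theory course", {"task": "study", "location": "office"}),
]
context = Context({"task": "repair", "location": "field", "device": "phone"})
for item in recommend(library, context):
    print(item.title)
```

The point isn’t the code itself but the shape of it: once content is described and context is sensed, delivery becomes a matter of rules, and that’s the foundation an intelligently adaptive system builds on.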
So I’m inclined to think this conference is about leveraging context in intelligent ways. Or that it can be, will be, and should be. Your mileage may vary ;).