Here’s my mind map of John Romero’s keynote on social gaming (again, done with OmniGraffle on my iPad); it’s smaller than the Alan Kay one, as he only talked for half an hour:
Alan Kay keynote mindmap from #iel2010
Mobile Affordances
It occurred to me, for several reasons, to think about mobile from the perspective of affordances. I’d done this before for virtual worlds, and it only seems right to do the same for mobile learning. So off to Graffle I went…
The core is portable processing power that is synced back into the environment. On top of that, we can have ubiquitous connectivity, we can connect to sensors that can recognize the world (e.g. cameras) and our context (e.g. GPS), and we can design capabilities that provide us content and computation power.
From those, we can link content presentation with connectivity to communicate with others; capture the world and reflect upon it, or share it with others for their support and mentorship; connect with people in context for live support; and layer content upon the context as augmented reality.
These capabilities can be layered. Interactive content on the go, for example, gives us mobile games; link that with augmented reality, and we can start having alternate reality games.
This is a first cut, so I welcome feedback. What am I confounding? What am I missing?
Mobile as Main Mode
As I was booking my travel to San Diego for the eLearning Guild’s mLearning conference, mLearnCon (June 15-17), I thought about a conference focused on mobile learning versus the regular, full elearning conference, or even a full training conference (congrats to Training magazine on pulling a phoenix). And I wondered how much this is a niche thing versus the whole deal.
Now, I don’t think all of everything needs to be pulled through a mobile device, but the realization I had is that these devices are going to be increasingly ubiquitous, increasingly powerful, and consequently will be the go-to way individuals augment their ability to work. Similarly, workers will increasingly be mobile. Combining the two, it may be that support will be expected first on the personal device! While the way each device is used will differ (desktops for long stretches, mobile devices for short access), most ‘support’ of tasks will occur via mobile devices.
That is, people will use their mobile devices to contact colleagues, look for answers, and access materials and tools ‘in the moment’. The benefit of desktops will be as tools for knowledge work; there will still be needs for information access, colleague access, and collaboration, but increasingly we will want those when and where we want them.
I’m thinking mobile could become the default design target, with desktop augments possible, rather than the other way around, though you might still want a desktop for big design work where screen real estate matters. For example, I’m designing diagrams on my iPad. I wouldn’t want to do it on my iPhone, but I am glad to take the iPad with me in a smaller form factor than a laptop. I may take the work back and polish it on the laptop, but my new performance ecosystem is more distributed. And that’s the point.
Increasingly, we expect at least some access to our information wherever we are. (Yes, there are some folks who still eschew a mobile phone. There are people who still avoid a computer, or even electricity!) Mostly, however, we’re seeing people finding value in augmenting their capabilities digitally. And so, maybe we increasingly need to view augmentation as the baseline, and dedicated capability as the icing on the cake for specialized work.
This may be too much, but I hope you’re seeing that mobile is more than just a niche phenomenon. There are real opportunities on the table, and real benefits to be had. I’m surprised that it took so long, frankly, as I figured mobile was closer to ready-for-prime-time than virtual worlds. Now, however, while there are still compatibility problems, mobile really is ready to rock. Are you?
Apple missing the small picture
I’ve previously discussed the fight between Apple and Adobe about Flash (e.g. here), but I had a realization that I think is important. What I was talking about before was the potential to create a market place beyond text, graphics, and media, and to start capitalizing on learning interactivity. What was needed was a cross-platform capability.
Which Apple is blocking, for interactivity.
Apple allows cross-platform media players, whether hyperdocs (c.f. Outstart, Hot Lava, and Hybrid Learning) or media (e.g. video and audio formats are playable). What they’re not is cross-platform for interactivity.
Now, I understand that Apple’s rabidly focused on the customer experience (I like the usability), and limiting development to content is a way to essentially guarantee a vibrant experience. And I don’t care a fig about the claims about ‘openness’, which in both cases are merely a distraction. Frankly, I haven’t missed Flash on my iPhone or iPad. I hardly miss it in my browser (I have a Firefox extension that blocks it unless I explicitly allow it, and I rarely do; and I browse a lot)!
What I care about is that, by not supporting cross-platform programs that output code for different operating systems (OS), Apple is hurting a significant portion of the market.
I came to this thought from thinking about how companies should want to go beyond media to the next level. There will be situations where navigable content isn’t enough, and a company will want to provide interactivity, whether it’s a dynamic user order configuration tool, a diagnostic tool, or a learning simulation. There are times when content or a web-form just won’t cut it.
Big companies can probably afford dedicated programming to make these apps come to life on more than one platform: Windows Mobile, WebOS, Blackberry OS, Android, and iPhone OS (they need a name for their mobile OS now that the iPad’s around: MacOSMobile?), but others won’t.
What are small to medium sized companies supposed to do? They’d like to support their mobile workers with smartphones regardless of OS, but when they’re a one-to-few person shop, they aren’t going to have the development resources. They might have a great idea for an app, and they probably have or can get a Flash programmer, but they won’t have the capability to develop separately for each platform. And no one’s convinced me that HTML 5 is going to bring the capability to even build Quest, let alone a training game with characteristics like Tips on Tap.
Worse, how about not-for-profits, or the education sector? How are these small organizations, with limited budgets, supposed to expand the markets? How can anyone develop an ability to transcend the current stranglehold of publishers on learning content?
Yes, the cross-platform developer might not carry the latest and greatest features of the OS forward, but they’re meeting real needs. There are the ‘for market’ applications and the pure content plays, but there’s a middle ground that will increasingly comprehend the potential, yet be shut out of the opportunity because they can’t develop a meaningful solution for a limited market that just needs capability, not polish.
I get that Flash isn’t efficient. I note that neither Adobe nor Apple talks about its software development practices, so I don’t know whether either uses some of the more trusted methods of good code development like agile programming, PSP & TSP, or refactoring, but I think that doesn’t matter. While in the long run I think it would be to their advantage, even a slow and slightly buggy version of a needed app would be better received and more useful than none.
I don’t have the email address to lob this at Steve directly like some have, but I’d like to see if he can comprehend and address the issue for the people for whom delivering interactivity could mean anything from more small-to-medium enterprise success, to meeting a real need in the community, to lifting our children to a higher learning plane, but who don’t have much in the way of resources.
Quite simply, a cross-platform interactivity solution really doesn’t undermine the Apple experience (look at the Mac environment), as it’s likely to be a small market. Heck, brand it as a 2nd Class app or something, but don’t leave out those who might have a real need for an easy cross platform capability.
I’m curious: do you think that the ability to go beyond navigable content to interactivity in a cross-platform way could be useful to a serious number of people in a lot of different little pockets of activity?
When to LMS
Dave Wilkins, who I admire, has taken up the argument for the LMS in a long post, after a suite of posts (including mine). I know Dave ‘gets’ the value of social learning, but he also wants folks to recognize the current state of the LMS, where major players have augmented the core LMS functions with social tools, tool repositories, and more. I won’t do a point-by-point argument, since Dan Pontefract has eloquently done so, and I agree with many of the points Dave makes. I want, however, to point to a whole perspective shift that characterizes where I come from.
I earlier made two points: one is that the LMS can be valuable if it has all the features you need and you want an integrated suite, or if you need the LMS features as part of a larger federated suite. I made the analogy to the tradeoffs between a Swiss Army knife and a toolbox: you either have one tool with all the features you need, or you pull together a suite of separate tools with some digital ‘glue’. It may be that the glue is custom code from your IT department, or one tool that provides one or more of the functions and can integrate other tools (e.g. SharePoint, as Harold Jarche points out in a comment on a subsequent Dave post).
The argument for the former is one tool, one payment, one support location, one integrated environment. That may make sense for a lot of companies, particularly small ones. Realize that there are tradeoffs, however. The main one, to me, is that you’re tied to the tools provided by the vendor. They may be great, or they may not; they may have merely adequate capabilities, or truly superb ones. And as new things are developed, you either have to integrate them into the vendor’s tool yourself, or wait for the vendor to develop that capability on its own schedule.
Which, again, may still be an acceptable solution if the price is right and the functionality is there. However, only if it’s organized around tasks. If it’s organized around courses, all bets are off. Courses aren’t the answer any more!
However, if it’s not organized around courses, (and Dave has suggested that a modern LMS can be a portal-organized function around performance needs), then why the #$%^&* are you calling it an LMS? Call it something else (Dan calls it a Learning, Content, & Collaboration system or LCC)!
Which raises the question of whether you can actually manage learning. I think not. You can manage courses, but not learning. And this is an important distinction, not semantics. Part of my problem is the label. It leads people to make the mistake of thinking that their function is about ‘learning’ with a small ‘l’, the formal learning. Let me elaborate.
Jane Hart developed a model for organizational learning that really captures the richness of learning. She talks about:
- FSL – Formal Structured Learning
- IOL – Intra-Organizational Learning
- GDL – Group Directed Learning
- PDL – Personal Directed Learning
- ASL – Accidental & Serendipitous Learning
The point I want to make here is that FSL is the compliance and certification stuff that LMSs handle well. And if that’s all you see as the role of the learning unit, you’ll find that an LMS meets your needs. If you instead see the full picture, you’ll likely want to look at a richer suite of capabilities. You’ll want to provide performance support, and you’ll absolutely want to support communication, collaboration, and more.
The misnomer that you can manage learning becomes far clearer when you look at the broader picture!
So, my initial response to Dave is that you might want the core LMS capabilities as part of a federated solution, and you might even be willing to use what’s termed LMS software if it really is LCC or a performance ecosystem solution, and are happy with the tradeoffs. However, you might also want to look at creating a more flexible environment with ‘glue’ (still with single sign-on, security, integration, etc, if your IT group or integration tool is less than half-braindead).
But I worry that unless people are clued in, selling them (particularly with LMS label) lulls them into a false confidence. I don’t accuse Dave of that, by the way, as he has demonstrably been carrying the ‘social’ banner, but it’s a concern for the industry. And I haven’t even talked about how, if you’re still talking about ‘managing’ learning, you might not have addressed the issues of trust, value, and culture in the community you purport to support.
Performer-focused Integration
On a recent night, I was part of a panel on the future of technical communication with the local chapter of the Society for Technical Communication, and there were several facets of the conversation that I found really interesting. Our host had pulled together an XML architecture consultant who’s deep into content models (e.g. DITA) and tools, Yas Etassam, and another individual who started a very successful technical writing firm, Meryl Natchez. And, of course, me.
My inclusion shouldn’t be that much of a surprise. The convener had heard me speak on the performance ecosystem (via Enterprise 2.0, with a nod to my ITA colleagues), and I’d included mention of content models, learning experience design, etc. My background in interface design (e.g. studying under Don Norman, as a consequence teaching interface design at UNSW), and work with publishers and adaptive systems using content models, means I’ve been touching a lot of their work and gave a different perspective.
It was a lively session, with us disagreeing and then finding the resolution, to our edification as well as the audience’s. We covered new devices, tools, and movements in corporate approaches to supporting performance, as well as shifts in skill sets.
The first topic of interest was the perspective they took on their role. They talk about ‘content’, and include learning content as well. I queried that, asking whether they saw their area of responsibility as covering formal learning too, and was surprised to hear them answer in the affirmative. After all, it’s all content. I countered with the expected “it’s about the experience” stance, to which Meryl replied to the effect of “if I’m working, I just want the information, not an experience”. We reconciled that formal learning, where learners need support for motivation and context, needs the sort of experience I was talking about, but even her situation requires the information coming in a way that isn’t disruptive: we need to think about the performer experience.
The other facet to this was the organizational structure in this regard. Given the view that it’s all content, I asked whether they thought they covered formal learning, and they agreed that they didn’t deliver training, but often technical writers create training materials: manuals, even online courses. Yet they also agreed, when pushed, that most organizations weren’t so structured, and documentation was separate from training. And we all agreed that, going forward, this was a problem. I pushed the point that knowledge was changing faster than their processes could cope, and they agreed. We also agreed that breaking down those silos and integrating performance support, documentation, learning, eCommunity, and more was increasingly necessary.
This raised the question of what to do about user-generated content: I was curious what they saw as their role in this regard. They took a content management stance, for one, suggesting that it’s content and needs to be stored and made searchable. Yas talked about the powerful systems that folks are using to develop and manage content. We also discussed the analogy to learning, in that the move is from content production to content production facilitation.
One of the most interesting revelations for me actually came before the panel, in the networking and dinner session, where I learned about topic-based authoring. I’ve been a fan of content models for over a decade now, from back when I was talking about granularity of learning objects. The concept I was promoting was to write tightly around definitions for introduction components, concept presentations, examples, practice items, etc. It takes more discipline, but the upside is much more powerful opportunities to start doing the type of smart delivery that we’re now capable of and even seeing. Topic-based authoring is currently applied for technical needs (e.g. performance support), which is reason enough, but there can and should be educational applications as well. The technical publications area is a bit ahead on this front: topic-based authoring is a discipline that provides the rigor needed to make this approach work.
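To make the content-model idea concrete, here’s a minimal sketch (all names and content invented for illustration; real systems use standards like DITA) of writing tightly-typed components and letting a delivery step assemble them differently for formal learning versus performance support:

```python
from dataclasses import dataclass

# Hypothetical topic-based authoring sketch: content is written as small,
# typed components rather than monolithic documents, so a delivery engine
# can assemble just what a given performer needs.

@dataclass
class Topic:
    subject: str  # what the topic is about, used for lookup
    kind: str     # 'concept', 'example', 'practice', 'reference', ...
    body: str

# A tiny invented component library around one subject.
LIBRARY = [
    Topic("styles", "concept", "A style bundles formatting decisions under one name."),
    Topic("styles", "example", "Apply 'Heading 2' to every section title."),
    Topic("styles", "practice", "Restyle this sample document using only styles."),
    Topic("styles", "reference", "Styles panel: Format > Styles."),
]

def assemble(subject, need):
    """Pull the component types suited to the performer's need."""
    wanted = {
        "learning": ["concept", "example", "practice"],  # formal learning path
        "support": ["reference", "example"],             # performance support path
    }[need]
    by_kind = {t.kind: t for t in LIBRARY if t.subject == subject}
    return [by_kind[k].body for k in wanted if k in by_kind]
```

The point of the discipline is visible even at this toy scale: the same source components serve both training and documentation, rather than the content forking into separate silos.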
Meryl pointed out that the skill set shift needn’t be unitary: there are a lot of related areas in their world: executive communications, content management, information architecture; even instructional design is a potential path. The basics of writing are still necessary, but as in our field, facilitation skills for user-generated content may also play a role. The rate of change means that technical writers, just like instructional designers, won’t be able to produce all the needed information, so a way for individuals to develop materials will be needed. As mentioned above, Yas just cared that they did the necessary tagging! Which gets into interesting systems questions about how we can make that process as automatic as possible and minimize the onerous part of the work.
The integration we need is for all those who are performer-focused to not be working in ignorance of (let alone opposition to) each other. Formal learning should be developed in awareness of the job aids that will be used, and vice-versa. The flow from marketing to engineering has to stop forking as the same content gets re-purposed for documentation, customer training, sales training, and customer service, but instead have a coherent path that populates each systematically.
Training Book Reviews
The eminent Jane Bozarth has started a new site called Training Book Reviews. Despite the unfortunate name, I think it’s a great idea: a site for book reviews for those of us passionate about solving workplace performance needs. While submitting new reviews would be great, she notes:
share a few hundred words
1) on a favorite, must-own title, or maybe even
2) of criticism about a venerated work that has perhaps developed an undeserved glow
In the interest of sparking your participation (for instance, someone should write a glowing review of Engaging Learning :), here’s a contribution:
More than 20 years ago now, Donald Norman released what subsequently became the first of a series of books on design. My copy is titled The Psychology of Everyday Things, (he liked the acronym POET) but based upon feedback, it was renamed The Design of Everyday Things as it really was a fundamental treatise on design. And it has become a classic. (Disclaimer, he was my PhD advisor while he was writing this book.)
Have you ever burned yourself trying to get the shower water flow and temperature right? Had trouble figuring out which knob to turn to turn on a particular burner on the stove? Push on a door that pulls or vice-versa? Don explains why. The book looks at how our minds interact with the world, how we use the clues that our current environment provides us coupled with our prior experience to figure out how to do things. And how designers violate those expectations in ways that reliably lead to frustration. While Don’s work on design had started with human-computer interaction and user-centered design, this book is much more general. Quite simply, you will find that you look at everyday things: shower controls, door handles, and more in a whole new way.
The understanding of how we understand the world is not just for furniture designers, or interface designers, but is a critical component of how learning designers need to think. While his subsequent books, including Things That Make Us Smart and Emotional Design, add deeper cognition and engagement (respectively) and more, the core understanding from this first book provides a foundation that you can (and should) apply directly.
Short, pointed, and clear, this book will have you nodding your head in agreement when you recognize the frustrations you didn’t even know you were experiencing. It will, quite simply, change the way you look at the world, and improve your ability to design learning experiences. A must read.
Interactivity & Mobile Development
A while ago, I characterized the stages of web development as:
- Web 1.0: producer-generated content, where you had to be able to manage a server and work in obscure codes
- Web 2.0: user-generated content, where web tools allowed anyone to generate web content
- Web 3.0: system-generated content, where engines or agents will custom-assemble content for you based upon what’s known about you, what context you’re in, what content’s available, etc
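The third stage above is the least familiar, so here’s a minimal sketch of what a system-generated content engine might do (all names, content items, and scoring rules here are invented for illustration):

```python
# Hypothetical 'system-generated content' sketch: an engine custom-selects
# content based on what's known about the user and their current context.

CONTENT = [
    {"title": "Intro to expense reports", "level": "novice", "form": "video"},
    {"title": "Expense report checklist", "level": "expert", "form": "text"},
    {"title": "Expense policy deep dive", "level": "expert", "form": "video"},
]

def assemble_for(user, context):
    """Rank available content by fit to the user's expertise and context."""
    def score(item):
        s = 0
        if item["level"] == user["level"]:
            s += 2  # match the user's expertise
        if context["bandwidth"] == "low" and item["form"] == "text":
            s += 1  # prefer lightweight forms when connectivity is poor
        return s
    return max(CONTENT, key=score)
```

So an expert on a slow mobile connection gets the text checklist, while a novice at a desk gets the introductory video; the same engine, different assemblies.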
It occurred to me that an analogous approach may be useful in thinking about interactivity. To understand the problem, realize that there has been a long history of attempts to characterize different levels of interactivity (e.g. Rod Sims’ paper for ITFORUM), for a variety of reasons. More recently, interactivity has been proposed as an item to tag within learning object systems to differentiate objects. Unfortunately, the taxonomy has been ‘low’, ‘medium’, and ‘high’, without any parameters to distinguish between them. Very few people, without some guidance, are going to want to characterize their content as ‘low’ interactivity.
Thinking from the perspective of mobile content, it occurred to me that I see three basic levels of interaction. One is essentially passive: you watch a video, listen to audio, or read a document (text potentially augmented by graphics). This is roughly equivalent to producer-generated content. The next level is navigable content: specifically, hyper-documents (e.g. the web), where users can navigate to what they want. This comes into play for me on mobile, as both static content and navigable content are easily done cross-platform. I note that user-generated content through most web interfaces is technically beyond this level.
The next level is system-generated interaction, where what you’ve done has an effect on what happens next. The web is largely state-independent, though that’s changing (e.g. Amazon’s mass customization). This is where you have some computation going on in the background, whether it’s form processing or full game interaction. And this is where mobile falls apart: rich computation and associated graphics are hard to do. Flash has been the lingua franca of online interactivity, supporting cross-platform delivery; however, Flash hasn’t run well on mobile devices, it is claimed, for performance reasons. And there really is no other cross-platform environment: you have to compile for each platform independently.
This analysis provides three meaningful levels of interactivity for defining content, and indicates what is currently feasible for mobile as well as where the barriers remain. The mobile levels will change, perhaps if HTML 5 can support more powerful computation, interaction, and graphics, or if the performance problems (or the perception thereof) go away. Fingers crossed!
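The three levels lend themselves to concrete metadata rather than the vague low/medium/high tags. A hedged sketch (the field names are invented for illustration):

```python
# Classify content into the three proposed interactivity levels based on
# two observable properties, instead of asking authors to self-rate as
# 'low', 'medium', or 'high'.

def interactivity_level(content):
    """Return 'passive', 'navigable', or 'system-generated'."""
    if content.get("computes_state"):   # responses change what happens next
        return "system-generated"
    if content.get("has_navigation"):   # hyperlinked, user-directed paths
        return "navigable"
    return "passive"                    # watch, listen, or read only

# Example content descriptors at each level.
video = {"has_navigation": False, "computes_state": False}
hyperdoc = {"has_navigation": True, "computes_state": False}
simulation = {"has_navigation": True, "computes_state": True}
```

The virtue of parameters like these is that two people tagging the same object should land on the same level, which the low/medium/high scheme can’t guarantee.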
Better design doesn’t take longer!
I wrote a screed on this topic over at eLearn Mag, which I highly recommend. In short:
Better design takes no more time* and yields better outcomes
(*after an initial transition period).
I look forward to your thoughts!