Tomi Ahonen gave a very entertaining keynote here at the Guild’s mLearn Conference. Here’s my mind map:
Wizardly Collaboration and HyperCard
I was talking to my colleague Harold Jarche the other day about the changes in work needs and it triggered a thought. Normally, when we talk about performance support and collaboration, we think of creating job aids. Yet I believe that, increasingly, interactive performance support will be more valuable in generating meaningful outcomes. It occurred to me that there was a missed opportunity: editable wizards.
Now, when I talk about wizards, I mean software tools that interact with us, asking some questions and then using that information to do complex things for us, like filling out our taxes or configuring our email. This is fine for things that are static, but increasingly, things are dynamic. The question then becomes how we make more flexible, less brittle tools.
In content, we are using wikis as tools that are open for collaborative updating, Wikipedia of course being the best-known example. These are powerful ways for a community to keep a body of knowledge up to date. Can we have an intersection?
The idea that occurred to me was to have collaborative wizards; wizards written in a simple but reasonably powerful language that are open for editing. Rather than Wikzard, I thought I’d call it a Wizki (pronounced “whisky”, of course :).
Admittedly, having a simple but powerful language is non-trivial, but then I was reminded of HyperCard (which several of us reminisced about fondly just a short while ago). HyperCard was a simple environment to build applications in, with the property of ‘incremental advantage’ that Andi diSessa touted years ago. Imagine having a collaborative HyperCard! It could be done.
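To make the Wizki notion concrete, here is a minimal sketch (all names and the data format are my own illustration, not any real tool): the wizard's questions and output logic live as plain data, so a community could in principle edit them the way they edit a wiki page, while a small engine runs whatever definition is current.

```python
# Hypothetical "Wizki" sketch: the wizard definition is editable data,
# separate from the engine that runs it.
WIZARD_DEF = {
    "title": "Email setup helper",
    "questions": [
        {"key": "name", "prompt": "What is your name?"},
        {"key": "provider", "prompt": "Who is your email provider?"},
    ],
    "template": "Hi {name}, configure your client for {provider}.",
}

def run_wizard(definition, answers):
    """Walk the questions in order and fill the output template."""
    collected = {}
    for question in definition["questions"]:
        collected[question["key"]] = answers[question["key"]]
    return definition["template"].format(**collected)
```

The point of the separation is that editing the wizard means editing data, not code, which keeps the barrier to collaborative updating closer to wiki-editing than to programming.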
Of course, there are other simple programming environments (Scratch comes to mind), but we really need a simple (and cross-platform!) environment to develop applications again, and moreover a collaborative one is the next logical step in user-generated content.
I reckon it is past time to move beyond developing passive content, and start sharing interactions. What do you say?
I’ve been podcasted!
Rob Penn, CEO of SuddenlySmart (makers of SmartBuilder, one of the new breed of authoring tools), interviewed me last fall about engaging learning: game design, simulations, etc. It followed one by Professor Allison Rossett of SDSU (also available at the site).
I always find it hard to listen to myself (my voice sounds much better in my head :), and the audio is a little murky, but I hit the usual important notes about focusing on decisions that learners need to be able to make, getting challenge right, capturing misconceptions, and more.
Rob also gets me to discriminate between simulations, scenarios, and games (simulations are just models; scenarios have an initial state and a goal state learners should get to; you can tune a scenario into a game), and I also elaborate on how you go from multiple choice, through branching scenarios, to full simulation-driven engines (jumping off from Rob’s question instead of first answering it, mea culpa!).
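The simulation/scenario/game distinction can be sketched in code; the classes below are purely illustrative (a toy numeric state stands in for a real model), but they show the layering: a simulation is just a model you can act on, a scenario wraps it with an initial and goal state, and a game is a scenario tuned for engagement (here, by scoring the attempt).

```python
class Simulation:
    """Just a model: state plus rules for how actions change it."""
    def __init__(self, state):
        self.state = state

    def act(self, change):
        self.state += change  # toy rule: actions nudge a numeric state
        return self.state

class Scenario(Simulation):
    """A simulation plus an initial state and a goal state to reach."""
    def __init__(self, initial, goal):
        super().__init__(initial)
        self.goal = goal

    def solved(self):
        return self.state == self.goal

class Game(Scenario):
    """A scenario 'tuned' for engagement, e.g. by scoring efficiency."""
    def __init__(self, initial, goal):
        super().__init__(initial, goal)
        self.moves = 0

    def act(self, change):
        self.moves += 1
        return super().act(change)

    def score(self):
        return max(0, 100 - 10 * self.moves) if self.solved() else 0
```

The inheritance chain mirrors the argument: each level adds something to the one below rather than being a different kind of thing.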
Feedback welcome!
John Romero keynote mind map #iel2010
Alan Kay keynote mindmap from #iel2010
Mobile Affordances
It occurred to me for several reasons to think about mobile from the perspective of affordances. I’d done this before for virtual worlds, and it only seems right to do the same for mobile learning. So off to Graffle I went…
The core is portable processing power that is synced back into the environment. On top of that, we can have ubiquitous connectivity, we can connect to sensors that can recognize the world (e.g. cameras) and our context (e.g. GPS), and we can design capabilities that provide us content and computation power.
From those, we can link the content presentation with connectivity to communicate with others, take what we capture and reflect upon it or share it with others for their support and mentorship, we can be connected with people in context for live support, and we can layer content upon the context as augmented reality.
These capabilities can be layered. So interactive content could become mobile games. When linked with augmented reality, we can start having alternate reality games.
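The layering idea can be rendered as composition of capability sets; the labels below are my own shorthand for the affordances above, not a formal taxonomy, but they show how each combination names a possible mobile learning application.

```python
# Toy model: each affordance is a label; applications are combinations.
CORE = {"portable processing"}

def layer(base, *capabilities):
    """Combine a base set of affordances with additional capabilities."""
    return set(base).union(capabilities)

# Interactive content on the core platform -> mobile games.
mobile_game = layer(CORE, "interactive content")

# Context sensing plus content overlay -> augmented reality.
augmented_reality = layer(CORE, "context sensors", "content overlay")

# Layering mobile games with augmented reality -> alternate reality games.
alternate_reality_game = layer(mobile_game, *augmented_reality)
```

The point of the set-union framing is that nothing is lost as you layer: an alternate reality game still carries all the affordances of the game and the augmentation beneath it.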
This is a first cut, so I welcome feedback. What am I confounding? What am I missing?
Mobile as Main Mode
As I was booking my travel to San Diego for the eLearning Guild’s mLearning conference, mLearnCon (June 15-17), I thought about a conference focusing on mobile learning versus the regular, full elearning conference, or even a full training conference (congrats to Training magazine for pulling a phoenix). And I wondered how much this is a niche thing versus the whole deal.
Now, I don’t think all of everything needs to be pulled through a mobile device, but the realization I had is that these devices are going to be increasingly ubiquitous, increasingly powerful, and consequently will be the go-to way individuals augment their ability to work. Similarly, workers will increasingly be mobile. Combining the two, it may be that support will be expected first on the personal device! While the way the devices are used will differ (desktops for long periods of time, mobile devices for short access), the way most ‘support’ of tasks will occur will be via mobile devices.
That is, people will use their mobile devices to contact colleagues, look for answers, and access materials and tools ‘in the moment’. The benefit of desktops will be as tools to do knowledge work, and there will still be needs for information access, colleague access, and collaboration, but increasingly we may want those when and where we want them.
I’m thinking mobile could become the default target design, with desktop augments possible, versus the other way around, though you might still want a desktop for big design work where screen real estate matters. For example, I’m designing diagrams on my iPad. I wouldn’t want to do it on my iPhone, but I am glad to take the iPad with me in a smaller form factor than a laptop. I may take the work back and polish it on the laptop, but my new performance ecosystem is more distributed. And that’s the point.
Increasingly, we expect at least some access to our information wherever we are. (Yes, there are some folks who still eschew a mobile phone. There are people who still avoid a computer, or even electricity!) Mostly, however, we’re seeing people finding value in augmenting their capabilities digitally. And so, maybe we increasingly need to view augmentation as the baseline, and dedicated capability as the icing on the cake for specialized work.
This may be too much, but I hope you’re seeing that mobile is more than just a niche phenomenon. There are real opportunities on the table, and real benefits to be had. I’m surprised that it took so long, frankly, as I figured mobile was closer to ready-for-prime-time than virtual worlds. Now, however, while there are still compatibility problems, mobile really is ready to rock. Are you?
Apple missing the small picture
I’ve previously discussed the fight between Apple and Adobe about Flash (e.g. here), but I had a realization that I think is important. What I was talking about before was the potential to create a market place beyond text, graphics, and media, and to start capitalizing on learning interactivity. What was needed was a cross-platform capability.
Which Apple is blocking, for interactivity.
Apple allows cross-platform media players, whether hyperdocs (c.f. Outstart, Hot Lava, and Hybrid Learning) or media (e.g. video and audio formats are playable). What they’re not is cross-platform for interactivity.
Now, I understand that Apple’s rabidly focused on the customer experience (I like the usability), and limiting development to content is a way to essentially guarantee a vibrant experience. And I don’t care a fig about the claims about ‘openness’, which in both cases are merely a distraction. Frankly, I haven’t missed Flash on my iPhone or iPad. I hardly miss it in my browser (I have a Firefox extension that blocks Flash unless I explicitly open it, and I rarely do; and I browse a lot)!
What I care about is that, by not supporting cross-platform programs that output code for different operating systems (OS), Apple is hurting a significant portion of the market.
I came to this thought from thinking about how companies should want to go beyond media to the next level. There will be situations where navigable content isn’t enough, and a company will want to provide interactivity, whether it’s a dynamic user order configuration tool, a diagnostic tool, or a learning simulation. There are times when content or a web-form just won’t cut it.
Big companies can probably afford dedicated programming to make these apps come to life on more than one platform: Windows Mobile, WebOS, Blackberry OS, Android, and iPhone OS (they need a name for their mobile OS now that the iPad’s around: MacOSMobile?), but others won’t.
What are small to medium sized companies supposed to do? They’d like to support their mobile workers with smartphones regardless of OS, but when they’re a one-to-a-few person shop, they aren’t going to have the development resources. They might have a great idea for an app, and they probably have or can get a Flash programmer, but they won’t have the capability to develop separately across platforms. And no one’s convinced me that HTML 5 is going to bring the capability to even build Quest, let alone a training game with characteristics like Tips on Tap.
Worse, how about not-for-profits, or the education sector? How are these small organizations, with limited budgets, supposed to expand the markets? How can anyone develop an ability to transcend the current stranglehold of publishers on learning content?
Yes, the cross-platform developer might not carry the latest and greatest features of the OS forward, but they’re meeting real needs. There are the ‘for market’ applications, and the pure content plays, but there’s a middle ground that is going to increasingly comprehend the potential, but be shut out of the opportunity because they can’t develop a meaningful solution for their limited market that just needs capability, not polish.
I get that Flash isn’t efficient. I note that neither Adobe nor Apple talks about their software development practices, so I don’t know whether either uses some of the more trusted methods of good code development, like agile programming, PSP & TSP, or refactoring, but I think that doesn’t matter. While I think in the long run it would be to their advantage, I think that even a slow and slightly buggy version of a needed app would be better received and more useful than none.
I don’t have the email address to lob this at Steve directly like some have, but I’d like to see if he can comprehend and address the issue for the people caught in the situation where delivering interactivity could mean anything from more small-to-medium enterprise success, to meeting a real need in the community, to lifting our children to a higher learning plane, but they don’t have much in the way of resources.
Quite simply, a cross-platform interactivity solution really doesn’t undermine the Apple experience (look at the Mac environment), as it’s likely to be a small market. Heck, brand it as a 2nd Class app or something, but don’t leave out those who might have a real need for an easy cross platform capability.
I’m curious: do you think that the ability to go beyond navigable content to interactivity in a cross-platform way could be useful to a serious number of people in a lot of different little pockets of activity?
When to LMS
Dave Wilkins, who I admire, has taken up the argument for the LMS in a long post, after a suite of posts (including mine). I know Dave ‘gets’ the value of social learning, but he also wants folks to recognize the current state of the LMS, where major players have augmented the core LMS functions with social tools, tool repositories, and more. I won’t do a point-by-point argument, since Dan Pontefract has eloquently done so, and I also agree with many of the points Dave makes. I want, however, to point to a whole perspective shift that characterizes where I come from.
I earlier made two points: one is that the LMS can be valuable if it has all the features you need and you want an integrated suite; the other is that you may instead need the LMS features as part of a larger federated suite. I made the analogy to the tradeoffs between a Swiss Army knife and a toolbox. Here, you either have one tool that has all the features you need, or you pull together a suite of separate tools with some digital ‘glue’. It may be that the glue is custom code from your IT department, or one tool that integrates one or more of the functions and can integrate other tools (e.g. SharePoint, as Harold Jarche points out in a comment on a subsequent Dave post).
The argument for the former is one tool, one payment, one support location, one integrated environment. I think that may make sense for a lot of companies, particularly small ones. Realize that there are tradeoffs, however. The main one, to me, is that you’re tied to the tools provided by the vendor. They may be great, or they may not. They may have only adequate, or truly superb, capabilities. And as new things are developed, you either have to integrate them into the vendor’s tool, or wait for the vendor to develop that capability on their own priorities.
Which, again, may still be an acceptable solution if the price is right and the functionality is there. However, only if it’s organized around tasks. If it’s organized around courses, all bets are off. Courses aren’t the answer any more!
However, if it’s not organized around courses, (and Dave has suggested that a modern LMS can be a portal-organized function around performance needs), then why the #$%^&* are you calling it an LMS? Call it something else (Dan calls it a Learning, Content, & Collaboration system or LCC)!
Which raises the question of whether you can actually manage learning. I think not. You can manage courses, but not learning. And this is an important distinction, not semantics. Part of my problem is the label. It leads people to make the mistake of thinking that their function is about ‘learning’ with a small ‘l’, the formal learning. Let me elaborate.
Jane Hart developed a model for organizational learning that really captures the richness of learning. She talks about:
- FSL – Formal Structured Learning
- IOL – Intra-Organizational Learning
- GDL – Group Directed Learning
- PDL – Personal Directed Learning
- ASL – Accidental & Serendipitous Learning
The point I want to make here is that FSL is the compliance and certification stuff that LMSs handle well. And if that’s all you see as the role of the learning unit, you’ll see that an LMS meets your needs. If you, instead, see the full picture, you’ll likely want to look at a richer suite of capabilities. You’ll want to support performance support, and you’ll absolutely want to support communication, collaboration, and more.
The misnomer that you can manage learning becomes far more clear when you look at the broader picture!
So, my initial response to Dave is that you might want the core LMS capabilities as part of a federated solution, and you might even be willing to use what’s termed LMS software if it really is LCC or a performance ecosystem solution, and are happy with the tradeoffs. However, you might also want to look at creating a more flexible environment with ‘glue’ (still with single sign-on, security, integration, etc, if your IT group or integration tool is less than half-braindead).
But I worry that unless people are clued in, selling them (particularly with LMS label) lulls them into a false confidence. I don’t accuse Dave of that, by the way, as he has demonstrably been carrying the ‘social’ banner, but it’s a concern for the industry. And I haven’t even talked about how, if you’re still talking about ‘managing’ learning, you might not have addressed the issues of trust, value, and culture in the community you purport to support.
Performer-focused Integration
On a recent night, I was part of a panel on the future of technical communication with the local chapter of the Society for Technical Communication, and there were several facets of the conversation that I found really interesting. Our host had pulled together an XML architecture consultant who’s deep into content models (e.g. DITA) and tools, Yas Etassam, and another individual who started a very successful technical writing firm, Meryl Natchez. And, of course, me.
My inclusion shouldn’t be that much of a surprise. The convener had heard me speak on the performance ecosystem (via Enterprise 2.0, with a nod to my ITA colleagues), and I’d included mention of content models, learning experience design, etc. My background in interface design (e.g. studying under Don Norman, as a consequence teaching interface design at UNSW), and work with publishers and adaptive systems using content models, means I’ve been touching a lot of their work and gave a different perspective.
It was a lively session, with us disagreeing and then finding resolution, to our edification as well as the audience’s. We covered new devices, tools, and movements in corporate approaches to supporting performance, as well as shifts in skill sets.
The first topic that I think is of interest was the perspective they took on their role. They talk about ‘content’ and include learning content as well. I queried that, asking whether they saw their area of responsibility covering formal learning as well, and was surprised to hear them answer in the affirmative. After all, it’s all content. I countered with the expected “it’s about the experience” stance, to which Meryl replied to the effect of “if I’m working, I just want the information, not an experience”. We reconciled that formal learning, when learners need support for motivation and context, needs the sort of experience I was talking about, but even her situation required the information coming in a way that wasn’t disruptive: we needed to think about the performer experience.
The other facet to this was the organizational structure in this regard. Given the view that it’s all content, I asked whether they thought they covered formal learning, and they agreed that they didn’t deliver training, but often technical writers create training materials: manuals, even online courses. Yet they also agreed, when pushed, that most organizations weren’t so structured, and documentation was separate from training. And we all agreed that, going forward, this was a problem. I pushed the point that knowledge was changing faster than their processes could cope, and they agreed. We also agreed that breaking down those silos and integrating performance support, documentation, learning, eCommunity, and more was increasingly necessary.
This raised the question of what to do about user generated content: I was curious what they saw as their role in this regard. They took on a content management stance, for one, suggesting that it’s content and needed to be stored and made searchable. Yas talked about the powerful systems that folks are using to develop and manage content. We also discussed the analogy to learning in that the move is from content production to content production facilitation.
One of the most interesting revelations for me actually came before the panel, in the networking and dinner section, where I learned about Topic-Based Authoring. I’ve been a fan of content models for over a decade now, from back when I was talking about granularity of learning objects. The concept I was promoting was to write tightly around definitions for introduction components, concept presentations, examples, practice items, etc. It takes more discipline, but the upside is much more powerful opportunities to start doing the type of smart delivery that we’re now capable of and even seeing. Topic-based authoring is currently applied for technical needs (e.g. performance support), which is reason enough, but there can and should be educational applications as well. The technical publications area is a bit ahead on this front: topic-based authoring is a discipline around this approach that provides the rigor needed to make it work.
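The "write tightly around definitions" idea can be sketched as a tiny content model; the component names come from the post, but the functions and the flat dictionary format are my own illustration of how tagging by component type enables smart delivery later.

```python
# Hypothetical content model: every topic is tagged by component type.
COMPONENT_TYPES = {"introduction", "concept", "example", "practice"}

def make_topic(title, component, body):
    """A single tightly-scoped topic, validated against the model."""
    if component not in COMPONENT_TYPES:
        raise ValueError(f"unknown component type: {component}")
    return {"title": title, "component": component, "body": body}

def assemble(topics, *components):
    """Pull only the requested component types: e.g. concept + example
    for performance support, or all four for a full learning experience."""
    wanted = set(components)
    return [t for t in topics if t["component"] in wanted]
```

The discipline is in the tagging; once it's there, the same pool of topics can be assembled differently for documentation, performance support, or formal learning without rewriting anything.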
Meryl pointed out that the skill set shift needn’t be unitary: there are a lot of related areas in their world: executive communications, content management, information architecture; even instructional design is a potential path. The basics of writing are still necessary, but as in our field, facilitation skills for user-generated content may still play a role. The rate of change means that technical writers, just like instructional designers, won’t be able to produce all the needed information, so a way for individuals to develop materials will be needed. As mentioned above, Yas just cared that they did the necessary tagging! Which gets into interesting system questions about how we can make that process as automatic as possible and minimize the onerous parts of the work.
The integration we need is for all those who are performer-focused to not be working in ignorance of (let alone opposition to) each other. Formal learning should be developed in awareness of the job aids that will be used, and vice-versa. The flow from marketing to engineering has to stop forking as the same content gets re-purposed for documentation, customer training, sales training, and customer service, but instead have a coherent path that populates each systematically.