Here’s my mind map of John Romero’s keynote on social gaming (again, done with OmniGraffle on my iPad) (smaller than Kay’s, as he only talked for half an hour):
Mobile Affordances
It occurred to me for several reasons to think about mobile from the perspective of affordances. I’d done this before for virtual worlds, and it only seems right to do the same for mobile learning. So off to Graffle I went…
The core is portable processing power that is synced back into the environment. On top of that, we can have ubiquitous connectivity, we can connect to sensors that can recognize the world (e.g. cameras) and our context (e.g. GPS), and we can design capabilities that provide us content and computation power.
From those, we can link content presentation with connectivity to communicate with others; we can capture the world and reflect upon it, or share it with others for their support and mentorship; we can be connected with people in context for live support; and we can layer content upon the context as augmented reality.
These capabilities can be layered. So using interactive content could be mobile games. When linked with augmented reality, we can start having alternate reality games.
This is a first cut, so I welcome feedback. What am I confounding? What am I missing?
Mobile as Main Mode
As I was booking my travel to San Diego for the eLearning Guild’s mLearning conference, mLearnCon (June 15-17), I thought about a conference focusing on mobile learning versus the regular, full, elearning conference or even a full training conference (congrats to Training magazine on pulling a phoenix). And I wondered how much this is a niche thing versus the whole deal.
Now, I don’t think all of everything needs to be pulled through a mobile device, but the realization I had is that these devices are going to be increasingly ubiquitous, increasingly powerful, and consequently will be the go-to way individuals will augment their ability to work. Similarly, workers will increasingly be mobile. Combining the two, it may be that support will be expected first on the personal device! While usage patterns will differ (desktops for long stretches, mobile devices for short access), most ‘support’ of tasks will occur via mobile devices.
That is, people will use their mobile devices to contact colleagues, look for answers, and access materials and tools ‘in the moment’. The benefit of desktops will be tools to do knowledge work, and there will be needs for information access, colleague access, and collaboration, but increasingly we may want those when and where we want them.
I’m thinking mobile could become the default target design, with desktop augments possible, versus the other way around, though you might still want a desktop for big design work where screen real estate matters. For example, I’m designing diagrams on my iPad. I wouldn’t want to do it on my iPhone, but I am glad to take it with me in a smaller form-factor than a laptop. I may take it back and polish it on the laptop, but my new performance ecosystem is more distributed. And that’s the point.
Increasingly, we expect at least some access to our information wherever we are. (Yes, there are some folks who still eschew a mobile phone. There are people who still avoid a computer, or even electricity!) Mostly, however, we’re seeing people finding value in augmenting their capabilities digitally. And so, maybe we increasingly need to view augmentation as the baseline, and dedicated capability as the icing on the cake for specialized work.
This may be too much, but I hope you’re seeing that mobile is more than just a niche phenomenon. There are real opportunities on the table, and real benefits to be had. I’m surprised that it took so long, frankly, as I figured mobile was closer to ready-for-prime-time than virtual worlds. Now, however, while there are still compatibility problems, mobile really is ready to rock. Are you?
When to LMS
Dave Wilkins, who I admire, has taken up the argument for the LMS in a long post, after a suite of posts (including mine). I know Dave ‘gets’ the value of social learning, but also wants folks to recognize the current state of the LMS, where major players have augmented the core LMS functions with social tools, tool repositories, and more. I won’t make a point-by-point argument, since Dan Pontefract has eloquently done so, and I agree with many of the points Dave makes. I want, however, to point to a whole perspective shift that characterizes where I come from.
I earlier made two points: one is that the LMS can be valuable if it has all the features you need and you want an integrated suite, or if you need the LMS features as part of a larger federated suite. The other is the analogy I made to the tradeoffs between a Swiss Army knife and a toolbox. Here, you either have one tool that has all the features you need, or you pull together a suite of separate tools with some digital ‘glue’. It may be that the glue is custom code from your IT department, or one tool that integrates one or more of the functions and can integrate other tools (e.g. SharePoint, as Harold Jarche points out in a comment on a subsequent Dave post).
The argument for the former is one tool, one payment, one support location, one integrated environment. I think that may make sense for a lot of companies, particularly small ones. Realize that there are tradeoffs, however. The main one, to me, is that you’re tied to the tools provided by the vendor. They may be great, or they may not; they may have only adequate capabilities, or truly superb ones. And as new things are developed, you either have to integrate into their tool, or wait for them to develop that capability on their own priorities.
Which, again, may still be an acceptable solution if the price is right and the functionality is there. However, only if it’s organized around tasks. If it’s organized around courses, all bets are off. Courses aren’t the answer any more!
However, if it’s not organized around courses, (and Dave has suggested that a modern LMS can be a portal-organized function around performance needs), then why the #$%^&* are you calling it an LMS? Call it something else (Dan calls it a Learning, Content, & Collaboration system or LCC)!
Which raises the question of whether you can actually manage learning. I think not. You can manage courses, but not learning. And this is an important distinction, not semantics. Part of my problem is the label. It leads people to make the mistake of thinking that their function is about ‘learning’ with a small ‘l’, the formal learning. Let me elaborate.
Jane Hart developed a model for organizational learning that really captures the richness of learning. She talks about:
- FSL – Formal Structured Learning
- IOL – Intra-Organizational Learning
- GDL – Group Directed Learning
- PDL – Personal Directed Learning
- ASL – Accidental & Serendipitous Learning
The point I want to make here is that FSL is the compliance and certification stuff that LMSs handle well. And if that’s all you see as the role of the learning unit, you’ll see that an LMS meets your needs. If you, instead, see the full picture, you’ll likely want to look at a richer suite of capabilities. You’ll want to provide performance support, and you’ll absolutely want to support communication, collaboration, and more.
The misnomer that you can manage learning becomes far more clear when you look at the broader picture!
So, my initial response to Dave is that you might want the core LMS capabilities as part of a federated solution, and you might even be willing to use what’s termed LMS software if it really is LCC or a performance ecosystem solution, and are happy with the tradeoffs. However, you might also want to look at creating a more flexible environment with ‘glue’ (still with single sign-on, security, integration, etc, if your IT group or integration tool is less than half-braindead).
But I worry that unless people are clued in, selling them (particularly with the LMS label) lulls them into a false confidence. I don’t accuse Dave of that, by the way, as he has demonstrably been carrying the ‘social’ banner, but it’s a concern for the industry. And I haven’t even talked about how, if you’re still talking about ‘managing’ learning, you might not have addressed the issues of trust, value, and culture in the community you purport to support.
Interactivity & Mobile Development
A while ago, I characterized the stages of web development as:
- Web 1.0: producer-generated content, where you had to be able to manage a server and work in obscure codes
- Web 2.0: user-generated content, where web tools allowed anyone to generate web content
- Web 3.0: system-generated content, where engines or agents will custom-assemble content for you based upon what’s known about you, what context you’re in, what content’s available, etc
It occurred to me that an analogous approach may be useful in thinking about interactivity. To understand the problem, realize that there has been a long history of attempts to characterize different levels of interactivity, e.g. Rod Sims’ paper for ITFORUM, for a variety of reasons. More recently, interactivity has been proposed as an item to tag within learning object systems to differentiate objects. Unfortunately, the taxonomy has been ‘low’, ‘medium’ and ‘high’ without any parameters to distinguish between them. Very few people, without some guidance, are going to want to characterize their content as ‘low’ interactivity.
Thinking from the perspective of mobile content, it occurred to me that I see 3 basic levels of interaction. One is essentially passive: you watch a video, listen to audio, or read a document (text potentially augmented by graphics). This is roughly equivalent to producer-generated content. The next level is navigable content. Most specifically, it’s hyper-documents (e.g. like the web), where users can navigate to what they want. This comes into play for me on mobile, as both static content and navigable content are easily done cross-platform. I note that user-generated content through most web interfaces is technically beyond this level.
The next level is system-generated interaction, where what you’ve done has an effect on what happens next. The web is largely state-independent, though that’s changing (e.g. Amazon’s mass-customization). This is where you have some computation going on in the background, whether it’s form processing or full game interaction. And, this is where mobile falls apart. Rich computation and associated graphics are hard to do. Flash has been the lingua franca of online interactivity, supporting delivery cross-platform. However, Flash hasn’t run well on mobile devices, it is claimed, for performance reasons. Yet there is no other cross-platform environment, really. You have to compile for each platform independently.
This analysis provides 3 meaningful levels of interactivity for defining content, and indicates what is currently feasible and what still provides barriers for mobile as well. The mobile levels will change, perhaps if HTML 5 can support more powerful computation, interaction, and graphics, or if the performance problems (or perception thereof) go away. Fingers crossed!
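To make the taxonomy concrete, here's a minimal sketch (the object records and titles are hypothetical illustrations, not any real metadata standard) of tagging content with these three levels instead of the vague 'low/medium/high':

```python
from enum import Enum

class Interactivity(Enum):
    """Three levels defined by what the learner can do,
    not by an unparameterized judgment call."""
    PASSIVE = 1           # watch, listen, or read; no choices
    NAVIGABLE = 2         # hyper-document; learner chooses a path
    SYSTEM_GENERATED = 3  # learner behavior changes what happens next

# Hypothetical learning-object records tagged with the taxonomy.
objects = [
    {"title": "Safety video",    "level": Interactivity.PASSIVE},
    {"title": "Policy wiki",     "level": Interactivity.NAVIGABLE},
    {"title": "Negotiation sim", "level": Interactivity.SYSTEM_GENERATED},
]

# Per the analysis above: everything at or below NAVIGABLE is easy to
# deliver cross-platform on mobile today; SYSTEM_GENERATED content is
# where the cross-platform barriers remain.
mobile_ready = [o["title"] for o in objects
                if o["level"].value <= Interactivity.NAVIGABLE.value]
```

The payoff of parameterized levels is that the tag can be computed or audited from the content's behavior, rather than left to self-report.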
A case for the LMS?
My Internet Time Alliance colleagues Harold Jarche and Jane Hart have been (rightly) eviscerating the LMS. Harold put up a post that the “LMS is no longer the centre of the universe“, while Jane asked “what is the future of the LMS“. Both of them are recognizing the point I make about the scope of learning in thinking about performance: it’s more than just courses, it’s the whole ecosystem.
I think that, before we completely abandon the LMS (and that’s not necessarily what they advocate), we should examine the key capabilities an LMS provides and determine whether that role can be taken up elsewhere or how it can manifest in the broader system. I see two key functions an LMS provides.
The first role is to provide access to courses: there’s one place where learners can go to sign up for face-to-face courses, or access online courses (whether to signup and then attend a synchronous event or to complete an asynchronous one). Providing access to courses is a good thing, as there are situations where formal learning is the appropriate approach.
A second role is to track learner usage and completion of courses. Again, ascertaining an individual’s capabilities is valuable, whether it be by programmed assessment, 360 evaluation or otherwise. Linking these interventions back to organizational outcomes is also valuable to determine whether the original objectives were appropriate and whether the intervention needs modification. (BTW, I’m definitely assuming for the sake of the argument that there’s an enlightened analysis focusing on meaningful workplace objectives and an enlightened design combining cognitive and emotional design into a minimal and engaging experience).
Other capabilities – authoring, communications, etc – are secondary, really. There are other ways to get those functions, so focusing on the core affordances is the appropriate perspective.
How do you provide learners with the ability to access courses? The LMS model is that the learner comes to the LMS. That’s a course-centric model. In a performance ecosystem model, we should have a learner performance-centric view, where courses, communities, resources (e.g. job aids, media files), etc are aligned to their interests, roles, and tasks. Really, performers should have custom portals!
Similarly, tracking performance should cross courses, use of resources, and community actions to look for opportunities to facilitate. We want to find ways to assist people in using the environment successfully, to augment the elements of the ecosystem, and to align it to the performance needs. This is a bigger problem, but an LMS isn’t going to solve it.
All this argues, as Jane suggests in a followup post on A Transition Path to the Future, that “It may be that you want to retain it in some cut-down form, or it may be that it is providing no real value at all, and it is a barrier to ‘learning'”. Harold similarly says in his followup post on Identifying a Collaboration Platform, that you “minimize use of the LMS”.
You could make access to formal learning available through a portal, but I think there’s an argument to have a tool for those responsible for formal learning to manage it. However, it probably should not be a performer-facing interface.
The big problem I see is that it’s too easy for the learning function in an organization to take the easy path and focus on the formal learning, and an LMS may be an enabler. If you take the Pareto rule Jay Cross (another ITA colleague) touts where we spend 80% of our money on the 20% of value people obtain in the workplace from formal learning, you may have misplaced priorities.
It is likely that the first tool you should buy is a collaboration platform, as Harold’s suggesting, and LMS capability is an afterthought or addition, rather than the core need. Truly, once people are up and performing, they need tools for accessing resources and each other. That infrastructure, like plumbing or electricity or air, is probably the most important (and potentially the best value) investment you can make.
Yes, you need to prepare the ground to seed, feed, weed, and breed the outcome, but the benefits are not only in the output, but also the demonstrable investment in employee value and success. Let an LMS be a functional tool, not an enabler of mis-focused energy, and certainly not the core of your learning technology investment. Look at the bigger picture, and budget accordingly.
May Big Q: Workplace Learning Technology 2015
The Learning Circuits Blog Big Question of the Month for May is “What will workplace learning technology look like in 2015?” This is a tough question for me, because I tend to see what could be the workplace tech if we really took advantage of the opportunities. Consequently, my predictions tend to be optimistic, as the real world has a way of not moving near as fast as one could wish. Still, I actually prefer to think on what could be the possibilities, as it’s more inspiring. Maybe I’ll answer both.
The opportunities on the table are immense. Mobile technologies are taking off, we’re getting real power in technology standards (still with some hiccups), and we’re crossing boundaries between reality and virtual worlds.
Smartphones are on the rise, and new portable devices (e.g. tablets) are expanding the possibilities. It’s highly plausible that we’ll have expanded the performance ecosystem to be location independent, and be providing the 4C’s in ways that allow powerful access, sharing, and collaboration.
Virtual worlds provide a different approach, where instead of augmenting reality, we’re re-contextualized in an artificial but enhanced space where capabilities that don’t exist in the real world are available to us. We can build 3D models, communicate in micro or macro spaces (within molecules or between galaxies), and open up the hidden components of real spaces. Again, we can leverage the 4C’s to go beyond courses to a fuller definition of learning.
This can be facilitated by standards. If HTML 5 coalesces as it should, we can and should be delivering rich interactivity, not just content delivery. Similarly, if we can move beyond ebook standards to capture interactivity, we can make easy marketplaces to deliver capability that is available regardless of connectivity. Virtual world standards are emerging too, and hopefully some convergence will have happened by 2015!
Also, if our backend systems progress as they can (and should), we should be able to move to Web 3.0 where instead of producers or users, the systems generate content. We can use semantic technologies to do customized delivery of information, pulling together what we know about the learner (e.g. from a competency map or learning path), about the content available (from a content model), and their tasks (from a job role) and their current context (their location and what’s on their calendar) to serve up just the right information.
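As a toy illustration of that system-generated assembly: the sketch below scores resources by overlap between their content-model tags and what we know about the learner, their role, and their context. All the field names and the scoring rule are my own hypothetical simplification, not any real semantic system's API.

```python
def assemble(learner, context, content):
    """Pick the most relevant resource by counting tag overlaps with
    the learner's competency gaps, role tasks, and current context."""
    def score(resource):
        tags = set(resource["tags"])
        return (len(tags & set(learner["competency_gaps"]))
                + len(tags & set(learner["role_tasks"]))
                + len(tags & set(context["tags"])))
    return max(content, key=score)

# Hypothetical learner model, context, and content model:
learner = {"competency_gaps": ["negotiation"], "role_tasks": ["sales_call"]}
context = {"tags": ["on_site", "sales_call"]}  # e.g. from GPS + calendar
content = [
    {"title": "Negotiation checklist", "tags": ["negotiation", "sales_call"]},
    {"title": "Product history",       "tags": ["marketing"]},
]

best = assemble(learner, context, content)
```

Real semantic delivery would of course use richer models than tag intersection, but the shape is the same: learner model + content model + context in, custom-assembled content out.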
This is all possible. What’s probable? We’ll have seen major progress in mobile tools, whether companies wake up or it’s just individual initiative to accessorize the brain. Virtual worlds will also be more prevalent, though not ubiquitous. Social media systems will be much more integrated into the workflow, and LMS will have become just a cog in the ecosystem, not the ecosystem. The social media will be available whether you’re in-world, in the world, or at your desk.
Semantics, however, are likely to still be nebulous. People are beginning to take advantage of powerful content systems leveraging tagging and flexible delivery, but it’s still embryonic. There’ll be more pockets, but it won’t be a groundswell yet.
I’m probably still being optimistic, but a guy can hope, and of course strive to make it so. This is what I do and where I like to play. I welcome more playmates in this great playground of opportunity.
Situated Learning Styles
I’ve been thrust back into learning styles, and saw an interesting relationship that bears repeating. Now you should know I’ve been highly critical of learning styles for at least a decade; not because I think there’s anything wrong with the concept, but because the instruments are flawed, and the implications for learning design are questionable.
This is not just my opinion; two separate research reports buttress these positions. A report from the UK surveyed 13 major and representative learning style instruments and found all with some psychometric questions. In the US, Hal Pashler led a team that concluded that there was no evidence that adapting instruction to learning styles made a difference.
Yet it seems obvious that learners differ, and different learning pedagogies would affect different learners differently. Regardless, using the best media for the message and an enlightened learning pedagogy seems best.
Even the simple question of whether to match learners to their style, or challenge them against their style, remains unanswered. One of the issues has been that much of the learning styles work has focused on cognitive aspects, yet cognitive science also recognizes two other areas: affective and conative, that is, who you are as a learner and your intentions to learn.
These two aspects, in particular the latter, could have an effect on learners. The affective side, typically considered to be your personality, is best characterized by the Big 5 work to consolidate all the different personality characteristics into a unified set. It is easy to see that elements like openness and conscientiousness would have a positive effect on learning outcomes, and neuroticism could have a negative one.
Similarly, your intention to learn would have an impact. I typically think of this as your motivation to learn (whether from an intrinsic interest, a desire for achievement, or any other reason) moderated by any anxiety about learning (again, regardless of whether it stems from performance concerns, embarrassment, or another issue). It is this latter, in particular, that manifests in several instruments of interest. Naturally, I’m also sympathetic to learning skills, e.g. learning to learn and domain-independent skills.
In the UK study, two relatively highly regarded instruments were those coming from Entwistle’s program of research, and another by Vermunt. Both result in four characterizations of learners: roughly undirected learners, surface or reproducing learners, strategic or application learners, and meaning/deep learners. Nicely, the work by Entwistle and Vermunt is funded research and not proprietary, and their work, instruments, and prescriptions are open.
I admit that any time I see a four element model, I’m inclined to want to put it into a quadrant model. And the emergent model from these three (each of which does include issues of motivation as well as learner skills) very much reminds me of the Situational Leadership model.
The situational leadership model talks about characterizing individual employees and adapting your leadership (really, coaching) to their stage. They have two dimensions: whether the learner needs task support and whether they need motivational support. In short, you tell unmotivated and unskilled employees what to do, but try to motivate them to get them to the stage where they’re willing but unskilled and skill them. When they’re still skilled but uncertain you support their confidence, and finally you just get out of their way!
This seems to me to be directly analogous to the learning models. If you choose two dimensions, needing learning skills support and needing motivational support, you could come up with a nice two-way model that provides useful prescriptions for learning. In particular, it seems to me to address the issue of when you match a learner’s style, and when you challenge: you match until the learner is confident, and then you challenge, both to broaden their capabilities and to keep them engaged.
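That two-dimensional model might be sketched as follows; the prescriptions are my hypothetical reading of the analogy to situational leadership, not a validated instrument:

```python
def prescription(needs_skills: bool, needs_motivation: bool) -> str:
    """Map the two support dimensions to a quadrant prescription,
    mirroring the situational leadership stages described above."""
    if needs_skills and needs_motivation:
        return "direct: tell them what to do, and work on motivation"
    if needs_skills:
        return "develop: build learning skills; match their style"
    if needs_motivation:
        return "support: bolster confidence in the value of learning"
    return "challenge: set them free, and stretch beyond their style"
```

Matching versus challenging falls out of the quadrant naturally: match while either form of support is still needed, then challenge once the learner is skilled and confident.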
So, in keeping with the UK study’s finding that most purveyors of instruments sell them and have no reason to work together, I suppose what I ought to do is create a learning assessment instrument and associated prescriptions of my own, label the categories, brand it, and flog it. How about:
- Buy: for those not into it, get them doing it
- Try: for those willing, get them to develop their learning skills and support the value thereof
- My: have them apply those learning skills to their goals and take ownership of the skills
- Fly: set them free and resource them
I reckon I’ll have to call it the Quinnstrument!
Ok, I’m not serious about flogging it, but I do think that we can start looking at learning skills, and the conative/intention to learn as important components of learning. Would you buy that?
The Great ADDIE Debate
At the eLearning Guild’s Learning Solutions conference this week, Jean Marripodi convinced Steve Acheson and me to host a debate on the viability of ADDIE in her ID Zone. While both of us can see both sides of ADDIE, Steve uses it, so I was left to take the contrary position (aligning well to my ‘genial malcontent’ nature).
This was not a serious debate, in the model of the Oxford Debating Society or anything, but instead we’d agreed that we were going to go for controversy and fun in equal measures. This was about making an entertaining and informative event, not a scientific exploration. And in that, I think we succeeded (you can review the tweet stream from attendees and some subsequent conversation). Rather than recap the debate (Gina Minks has a short piece in her overall summary of the day), I’ll recap the points:
The pros:
- ADDIE provides structured guidance for design
- ADDIE includes a focus on implementation and evaluation
- ADDIE serves as a valuable checklist to complement our idiosyncratic design habits
The cons:
- ADDIE is inherently a waterfall model, and needs patching to accommodate iterative development and rapid prototyping
- People use ADDIE too much as a crutch for design without taking responsibility for using it appropriately
- It assumes courses
The pragmatics:
Steve showed how he does take responsibility, putting evaluation in the middle and using it more flexibly. He uses Dick & Carey’s model to start with, ensuring that a course is the right solution. The fact that the initial ‘course, job aid, other problem’ analysis is not included, however, is a concern.
It also came out that having a process is a powerful argument against those who might try to press unreasonable production constraints on you. If a VP wants it done in an unreasonable time frame, or doesn’t want to allow you to question the analysis that a course is needed, you have push-back (“it’s in our process”), particularly in a process organization. You do want a process.
The Alternatives:
The obvious question came up about what would be used in place of ADDIE. I believe that ADDIE as a checklist would be a nice accompaniment to both a more encompassing and a more learning-centric approach. For the former, I showed the HPT model as a representation of a design approach considering courses as part of a larger picture. For the latter, I suggested that a focus on learning experience design would be appropriate.
Using an HPT-like approach first, to ensure that a course is the right solution, is necessary. Then, I’d focus on working backwards from the needed change (Michael Allen talked about using sketches as lightweight prototypes at the conference, and first drawing the last activity the user engaged in) thinking about creating a learning experience that develops the learner’s capability. Finally, I’d be inclined to use ADDIE as a checklist to ensure all the important components are considered, once I’d drafted an initial design (or several). ADDIE certainly may be useful in taking that design forward, through development, implementation and evaluation.
Summary
I think ADDIE falls apart most in the initial analysis, not being broad enough, and in the design process: e.g. most ID processes neglect the emotional side of the equation, despite the availability of Keller’s ARCS model (which wasn’t even in the TIP database!). Good users, like Steve, take responsibility for reframing it practically, but I’m not confident that even a majority of ADDIE use is so enabled. Consequently, I worry that ADDIE is more detrimental than good. It ensures the minimum, but it essentially prevents inspiration.
I’m willing to be wrong, but I’ve been looking at the debate on both sides for a long time. While I know that PowerPoint doesn’t kill people, people kill people, and the same is true of ADDIE, the continued reliance on it is problematic. We probably need a replacement, one that starts with a broader analysis, and then provides guidance across job aid development, course development and more, that has at core iterative and situated design, informed by the recognition of the emotional nature of human use. Anyone have one to hand? Thoughts on the above?

