Larry Irving kicked off mLearnCon with an inspiring talk about the ways in which technology can disrupt education. His ideas about VOOCs and nanodegrees were intriguing, and I wish he’d talked more about adaptive learning. A great kickoff to the event.
Curation trumps creation
In the past, it has been the role of L&D to ascertain the resources necessary to support performance in the organization. Finding the information, creating the resources, and making them available has often been a task that either results in training or complements it. I want to suggest, however, that times have changed and a new strategy may be more effective, at least in many instances.
Creating resources is hard. We’ve seen the need to revisit the principles of learning design because, despite the pleas that “we know this stuff already”, there are still too many bad elearning courses out there. Similarly with job aids: there are skills involved in doing them right, and assuming those skills exist is a mistake.
There’s also the fact that creating resources is time consuming. The time spent doing this may be better spent on other approaches; there are plenty of needs to address without creating more work.
On the flip side, there are now so many resources out there, about so many things, that it’s not hard to find an answer. Finding good answers is certainly more problematic than just finding an answer, but the answers are likely out there.
The implication here is to start curating resources, not creating them. They might come internally, from the employees, or from external sources, but regardless of provenance, if it’s already out there, it saves your resources for other endeavors.
The new mantra is Personal Knowledge Mastery (PKM), and while that’s for the individual, there’s a role for L&D here too: practicing ‘representative knowledge mastery’, as well as fostering PKM for the workforce. You should be monitoring feeds relevant to your role and to those you’re responsible for facilitating. You need to practice it to be able to preach it, and you should be preaching it.
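For the monitoring piece, here’s a minimal, illustrative sketch in Python of what a feed filter might look like, using the third-party feedparser library; the feed URLs and topic list are placeholders, not recommendations:

```python
# A minimal feed-curation sketch: pull items from feeds you follow and flag
# the ones matching topics you're responsible for. Assumes the third-party
# 'feedparser' library (pip install feedparser); URLs and topics are placeholders.
import feedparser

FEEDS = [
    "https://example.com/learning-design.rss",
    "https://example.com/performance-support.rss",
]
TOPICS = {"curation", "performance support", "social learning"}

def relevant_entries(feed_urls, topics):
    """Yield (source, title, link) for entries that mention any tracked topic."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(topic in text for topic in topics):
                yield feed.feed.get("title", url), entry.get("title"), entry.get("link")

for source, title, link in relevant_entries(FEEDS, TOPICS):
    print(f"[{source}] {title}\n  {link}")
```

The filter is just the mechanical step, of course; the curation is the human judgment about what’s worth sharing.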
The point is not to recreate resources that can already be found, conserving your energy for the things that are business critical. One organization has suggested that they create resources only for internal culture; everything else is curated. Certainly proprietary material should be the focus of whatever creation remains.
So, curate over create. Create when you have to, but only then. Finding good answers is more efficient than generating them.
#itashare
From Content to Experience
A number of years ago, I said that the problem for publishers was not going from text to content (as the saying goes), but from content to experience. I think elearning designers have the same problem: they are given a knowledge dump, and have to somehow transform that into an effective experience. They may even have read the Serious eLearning Manifesto, and want to follow it, but struggle with the transition or transformation. What’s a designer to do?
The problem is, designers will be told, “we need a course on this”, and given a dump of PowerPoints (PPTs), documents (PDFs), and maybe access to a subject matter expert (SME). This is all about knowledge. Even the SME, unless prompted carefully otherwise, will resort to telling you the knowledge they’ve learned, because they just don’t have conscious access to what they actually do. And this, by itself, isn’t a foundation for a course. Processing the knowledge, comprehending it, presenting it, and then testing on acquisition (e.g. what rapid elearning tools make easy) isn’t going to lead to a meaningful outcome. Sorry, knowledge isn’t the same as the ability to perform.
And this ignores, of course, whether this course is actually needed. Has anyone checked to see whether the skills associated with this knowledge have a connection with a real workplace performance issue? Is the performance need a result of a lack of skills? And is this content aligned to that skill? Too often folks will ask for a course on X when the barrier is something else. For instance, if the content is a bunch of knowledge that somehow you’re to magically put in someone’s head, such as product information or arbitrary rules, you’re far better off putting that information in the world than trying to put it in the head. It’s really hard to get arbitrary information into the head. But let’s assume that there is a core skill and workplace need for the sake of this discussion.
The key is determining what this knowledge actually supports doing differently. The designer needs to go through that content and figure out what individuals will be able to do that they can’t do now (that’s important), and then develop practice doing that. This is so important that if what they’ll be able to do differently isn’t there, there should be pushback. While you can talk to the SME (trying to get them to talk in terms of decisions they can make instead of knowledge), you may be better off inferring the decisions and then verifying and refining them with the SME. If you have access to several SMEs, better yet, get them in a room together and just facilitate until they come up with the core decisions, but there are many situations where that’s not feasible.
Once you have that key decision, the application of the skill in context, you need to create situations where learners can practice using it. You need to create scenarios where these decisions will play out. Even just better-written multiple choice questions help, if they have: a story setting, a situation precipitating the decision, decision alternatives that reflect the ways learners might go wrong, consequences of the decisions, and feedback. These practice attempts are the core of a meaningful learning experience. And there’s even evidence that putting problems up front, or at the core, is a valuable practice. You also want sufficient practice not just ’til they get it right, but until they have a high likelihood of not getting it wrong.
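To make that structure concrete, here’s a minimal Python sketch of how such a scenario item might be represented; the field names and flow are my own illustrative assumptions, not any standard schema:

```python
# A minimal sketch of a scenario-based practice item. Field names and flow
# are illustrative assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class Alternative:
    choice: str          # what the learner might decide
    misconception: str   # the flawed thinking this choice reflects ("" if correct)
    consequence: str     # what happens in the story world
    feedback: str        # why the choice was right or wrong
    correct: bool = False

@dataclass
class ScenarioQuestion:
    setting: str                     # the story setting
    situation: str                   # the event precipitating the decision
    prompt: str                      # the decision the learner faces
    alternatives: list[Alternative]  # including plausible wrong paths

def respond(picked: Alternative) -> None:
    # Show the story-world consequence first, then the instructive feedback.
    print(picked.consequence)
    print(picked.feedback)
```

Note that each wrong alternative is tied to the misconception it represents, so feedback can remediate the thinking, not just mark the answer wrong.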
One thing that might not be in the PDFs and PPTs is examples. It’s helpful to get colorful examples of someone using the information to successfully solve a problem, and also cases where they misapplied it and failed. Your SME should be able to help you here, telling you engaging stories of wins and losses. They may be somewhat resistant to the latter; worst case, have them tell stories about someone else.
The content in the PDFs and PPTs then gets winnowed down into just the resource material that helps the learner actually be able to do the task, to successfully make the decision. Consider having the practice set in a story, with the content available through the story environment (e.g. casebooks on the shelves for examples, a ‘library’ for concepts). But even if you present the (minimized) content and then have practice, you’ve shifted from knowledge dump/test to more of a flow of experience. A suite of meaningful practice, contextualized well and made compelling with a wee bit of exaggeration and careful alignment with the learner’s awareness, is the essence of experience.
Yes, there’s a bit more to it than that, but this is the core: focus on do, not dump. And, once you get in the habit, it shouldn’t take longer, it just takes a change in thinking. And even if it does, the dump approach isn’t liable to lead to any meaningful learning, so it’s a waste of time anyway. So, create experiences, not content.
Setting Story
I’ve been thinking about the deep challenge of motivating uninterested learners. To me, at least part of that is making the learning of intrinsic interest. One of those elements is practice, arguably the most important element in making learning work. So how do we make practice intrinsically interesting?
One of the challenging but important components of designing meaningful practice is choosing a context in which that practice is situated. It’s really about finding a story line that makes the action meaningful to both the learner and the learning. It’s creative (and consequently fun), but it’s also not intrinsically obvious (which I’ve learned after trying to teach it in both game design and advanced ID workshops). There are useful heuristics to follow, however, even if there’s no guaranteed formula beyond brainstorm, winnow, trial, and refine.
While Subject Matter Experts (SMEs) can be the bane of your existence when setting learning goals (they have conscious access to no more than 30% of what they do, so they tend to end up reciting what they know, which they do have access to), they can be very useful when creating stories. There’s a reason why they’ve spent the requisite time to become experts in the field, and that’s an aspect we can tap into. Find out why it’s of interest to them. In one instance, when asking experts about computer auditing, a colleague found that auditors likened it to playing detective, tracking back to find the error. It’s that sort of insight upon which a good game or practice exercise can hinge.
One of the tricks for working with SMEs is to talk about decisions. I argue that what is most likely to make a difference to organizations is that people make better decisions, and I also believe that using the language of decisions helps SMEs focus on what they do, not what they know. Between your performance gap analysis of the situation and expert insight into which decisions are key, you’re likely to find the key performances you want learners to practice.
You also want to find out all the ways learners go wrong. Here you may well hear instructors and/or SMEs say “no matter what we do, they always…”. Those are the things you want to know, because novices don’t tend to make random errors. Yes, there are some, owing to our cognitive architecture (it’s adaptive), which is why it’s bad to expect people to do rote things, but they’re a small fraction of mistakes. Instead, learners make patterned mistakes based upon flaws in their conceptualizations of the performance, aka misconceptions. And you want to trap those, because you’ll have a chance to remediate them in the learning context. They also make the challenge more appropriately tuned.
You also need the consequences of both the right choice and the misconceptions. Even if it’s just a multiple choice question, you should show what the real-world consequence is before providing the feedback about why it’s wrong. This is also a key element in scenarios, and in building models for serious games.
Then the trick is to ask SMEs about all the different settings in which these decisions are embedded. Such decisions tend to travel in packs, which is why scenarios are better practice than simple multiple choice, just as scenario-based multiple choice trumps a knowledge test. Regardless, you want to contextualize those decisions, and knowing the different settings that can be used gives you a greater palette to choose from.
Finally, you’ll want to decide how close you want the context to be to the real context. For certain high-stakes and well-defined tasks, like flying planes or surgery, you’ll want it quite close to the real situation. In other situations, where there’s broader applicability and less intrinsic interest (perhaps accounting or project management), you may want a more fantastic setting that facilitates broader transfer.
Exaggeration is a key element. Knowing what to exaggerate and when is not yet a science, but the rule of thumb is to keep the core decisions based upon the important variables, while the context can be heightened to raise the stakes. For example, accounting might not be riveting, but your job depends on it; raising the importance of the accounting decision in the learning experience will mimic that real-world importance, so you might be accounting for a mob boss who’ll terminate your existence if you don’t terminate the discrepancy in his accounts! Sometimes exaggeration can serve a pedagogical purpose as well, such as highlighting certain decisions that are rare in real life but really important when they occur. In one instance, we had asthma show up with a 50% frequency instead of the usual ~15%, as the respiratory complications that could occur required specific approaches to address.
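In code terms, that kind of pedagogical exaggeration can be as simple as weighted sampling; this Python fragment is purely illustrative, with case names and rates that are assumptions echoing the asthma example:

```python
# Illustrative only: oversample a rare-but-critical case in generated practice.
# Case names and rates are assumptions echoing the asthma example above.
import random

REAL_WORLD_RATES = {"routine": 0.85, "asthma_complication": 0.15}
PRACTICE_RATES   = {"routine": 0.50, "asthma_complication": 0.50}  # exaggerated

def draw_case(rates):
    """Pick a case type for the next practice scenario."""
    cases, weights = zip(*rates.items())
    return random.choices(cases, weights=weights, k=1)[0]

# Practice presents the critical complication far more often than reality,
# so learners get enough repetitions on the approach it requires.
session = [draw_case(PRACTICE_RATES) for _ in range(10)]
print(session)
```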
Ultimately, you want to choose a setting in which to embed the decisions. Just making it abstract decreases the impact of the learning, and making it about knowledge, not decisions, will render it almost useless, except for those rare bits of knowledge that have to absolutely be in the head. You want to be making decisions using models, not recalling specific facts. Facts are better off put in the world for reference, except where time is too critical. And that’s more rare than you’d expect.
This may seem like a lot of work, but it’s not that hard with practice. And the above is for critical decisions; in many cases, a good designer should be able to look at some content and infer what the decisions involved should be. It’s a different design approach than transforming knowledge into tests, but it’s critical for learning. Start working on your practice items first, aligned with meaningful objectives, and the rest will flow. That’s my claim; what say you?
Getting contextual
For the current ADL webinar series on mobile, I gave a presentation on contextualizing mobile in the larger picture of L&D (a natural extension of my most recent books). And a question came up about whether I thought wearables constituted mobile. Naturally my answer was yes, but I realized there’s a larger issue, one that gets meta as well as mobile.
So, I’ve argued that we should be looking at models to guide our behavior: creating them by abstracting from successful practices, conceptualizing them ourselves, or adapting them from other areas. A good model, with rich conceptual relationships, provides a basis for explaining what has happened and predicting what will happen, giving us a basis for making decisions. Which means models need to be as context-independent as possible.
So, for instance, when I developed the mobile models I use, e.g. the 4C‘s and the applications of learning (see figure), I deliberately tried to create understandings that would transcend the rapid changes characterizing mobile, and remain appropriately recontextualizable.
In the case of mobile, one of the unique opportunities is contextualization. That means using information about where you are, when you are, which way you’re looking, temperature or barometric pressure, or even your own state: blood pressure, blood sugar, galvanic skin response, or whatever else skin sensors can detect.
To put that into context (see what I did there): with desktop learning, augmenting formal learning could mean emails that provide new examples or practice spread out over time. With a smartphone you can do the same, but you could also have localized information, so that because of where you are you might get information related to a learning goal. With a wearable, you might get information because of what you’re looking at (e.g. a translation, or a connection to something else you know), or due to your state (too anxious? stop and wait ’til you calm down).
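To make that concrete, here’s a purely illustrative sketch of such contextual triggering; the sensor fields, rules, and thresholds are assumptions, not any particular platform’s API:

```python
# Purely illustrative: match sensed context against simple rules to decide
# what (if anything) to push. Sensor fields, rule contents, and the anxiety
# threshold are assumptions, not a real platform's API.
from dataclasses import dataclass

@dataclass
class Context:
    location: str    # e.g., from GPS or a geofence
    anxiety: float   # e.g., inferred from a galvanic skin response sensor, 0..1

@dataclass
class Rule:
    trigger_location: str
    content: str     # the learning nudge tied to that place

RULES = [
    Rule("warehouse_floor", "Quick scenario: forklift right-of-way"),
    Rule("customer_site", "Reminder: the new model's key differences"),
]

def next_nudge(ctx: Context, rules: list[Rule]) -> str | None:
    if ctx.anxiety > 0.8:
        return "You seem stressed; pause and wait 'til you calm down."
    for rule in rules:
        if rule.trigger_location == ctx.location:
            return rule.content  # location-triggered learning augmentation
    return None

print(next_nudge(Context(location="warehouse_floor", anxiety=0.2), RULES))
```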
Similarly for performance support: with a smartphone you could take what comes through the camera and overlay information on what shows on the screen; with glasses you could lay it on the visual field. With a watch or a ring, you might get an audio narration. And we’ve already seen how the accelerometers in fitness bracelets can track your activity and put it in context for you.
Social can not only connect you to who you need to know, regardless of device or channel, but also signal you that someone’s near, detecting their face or voice, and clue you in that you’ve met this person before. Or find someone you should meet because they’re nearby.
All of the above use contextual information to augment the other tasks you’re doing. The point is that you map the technology to the need, and infer the possibilities. Models are a better basis for elearning, too, so that you teach transferable understandings (made concrete in practice) rather than specifics that can get outdated. This is one of the elements we placed in the Serious eLearning Manifesto, of course. Models are also useful for coaching and mentoring, as well as for problem-solving, innovating, and more.
Models are powerful tools for thinking, and good ones will support the broadest possible uses. And that’s why I collect them, think in terms of them, create them, and most importantly, use them in my work. I encourage you to ensure that you’re using models appropriately to guide you to new opportunities, solutions, and success.
Peeling the onion
I’ve been talking a bit recently about deepening formal design, specifically to achieve learning that’s flexible, persistent, and develops the learner’s abilities to become self-sustaining in work and life. That is, not just for a course, but for a curriculum. And it’s more than just what we talked about in the Serious eLearning Manifesto, though of course it starts there. So, to begin with, it needs to start with meaningful objectives, provide related practice, and be trialed and developed, but there’s more: there are layers of development that wrap around the core.
One element I want to suggest is important is also in the Manifesto, but I want to push a bit deeper here. I worked to get in the point that behind, say, a procedure or a task that you apply to problems, there are models or concepts: a connected body of conceptual relationships that ties together your beliefs about why it should be done this way. For example, if you’ve a procedure or process you want people to follow, there is (or should be) a rationale behind it.
And you should help learners discover and see the relationships between the model and the steps, through examples and the feedback they get on practice. If they can internalize the understanding behind the steps, they are better prepared for the inevitable changes to the tools they use, the materials they work on, or the process changes that will come from innovation. Training them on X, when X will ultimately shift to Y, isn’t as helpful unless you help them understand the principles that led to performance on X and will transfer to Y.
Another element is that the output of the activities should be scrutable deliverables, annotated with the thoughts behind the result. These provide evidence of the thinking, both implicit and explicit: a basis for mentors/instructors to understand what’s good, and what may still need to be addressed, in the learner’s thinking. There’s also the creation of a portfolio of work, which belongs to the learner and can represent what they are capable of.
Of course, choosing the initial activities for the learner, and designing them to be engaging by being meaningful to the learner in important ways, is another layer of sophistication in the design. It can’t just be the traditional boring problems; instead, the challenges need to be contextualized. More than that (which is already in the Manifesto), you want to use exaggeration and story to really make the challenges compelling. Learning should be hard fun.
Another layer is that of 21st Century skills (for example, the SCANS competencies). These can’t be taught separately; they really need to manifest across whatever domain learning you are doing. So you need learners to not just learn concepts, but to apply those concepts to specific problems. And, in the requirements of the problems, you build in opportunities to problem-solve, communicate, collaborate, and exercise all the other foundational and workplace skills. They need to reappear again and again, and be assessed (and developed) separately.
Ultimately, you want the learner to be taking on responsibility themselves. Later assignments should include the learner being given parameters and choosing appropriate deliverables and formats for communication. And this requires an additional layer: a layer of annotation on the learning design. The learners need to see why the learning was designed the way it was, so that they can internalize the principles of good design and become self-improving learners. You, for example, in reading this far, have chosen to do this as part of your own learning, and hopefully it’s a worthwhile investment. That’s the point; you want learners to continue to seek out challenges, and the resources to succeed, as part of their ongoing self-development, and that comes from having seen learning design and been handed the keys at some point on the journey, with support that’s gradually faded.
The nuances of this are not trivial, but I want to suggest that they are doable. It’s a subtle interweaving, to be sure, but once you’ve got your mind around it (with scaffolded practice :), my claim is that it can be done, reliably and repeatedly. And it should. To do less is to miss some of the necessary elements for successful support of an individual to become the capable and continually self-improving learner that we need.
I touched on most of this when I was talking about Activity-Based Learning, but it’s worthwhile to revisit it (at least for me :).
Facilitating Innovation
One of the things that emerged at the recent A(S)TD conference was that a particular gap might exist. While there are resources about learning design, performance support design, social networking, and more, there’s less guidance about facilitating innovation. Which led me to think a wee bit about what might be involved. Here’s a first take.
So, first, what are the elements of innovation? Well, whether you listen to Steven Berlin Johnson on the story of innovation, or Keith Sawyer on ways to foster innovation, you’ll see that innovation isn’t individual. In previous work, I looked at models of innovation, and found that either you mutate an existing design, or meld two designs together. Regardless, it comes from working and playing well together.
The research suggests that you need to make sure you are addressing the right problem, diverge on possible solutions via diverse teams under good process, create interim representations, test, refine, repeat. The point being that the right folks need to work together over time.
The barriers are several. For one, you need to get the cultural elements right: welcoming diversity, openness to new ideas, safe to contribute, and time for reflection. Without being able to get the complementary inputs, and getting everyone to contribute, the likelihood of the best outcome is diminished.
You also shouldn’t take for granted that everyone knows how to work and play well together. Someone may not be able to ask for help in effective ways, or, perhaps more likely, others may offer input in ways that minimize the likelihood that it’ll be considered. People may not use the right tools for the job, either not being aware of the full range (I see this all the time) or just having different ways of working. And folks may not know how to conduct brainstorming and problem-solving processes effectively (I see this as well).
So, the facilitation role has many opportunities to increase the quality of the outcome. Helping establish the culture, first of all, is really important. A second role is to understand and promote the match of tools to needs; this requires, by the way, staying on top of the available tools. Being concrete about learning and problem-solving processes, educating folks before these skills are needed, and then monitoring for opportunities to tune those skills, is another valuable role. Finally, process facilitation is critical, whether serving in that role yourself, developing the skills in others, or both.
Innovation isn’t an event, it’s a process, and it’s something that I want P&D (Learning & Development 2.0 :) to be supporting. The organization needs it, and who better?
#itashare
What do elearning users say?
Towards Maturity is a UK-based but global initiative looking at organizations’ use of technology for learning. While not as well known in the US, they’ve been conducting benchmarking research on what organizations are doing, and trying to provide guidance as well. I even put their model as an appendix in the forthcoming book on reforming L&D. So I was intrigued to see the new report they have just released.
The report, a survey of 2,000 folks in a variety of positions in organizations, asks what they think about elearning. It covers several aspects of how people learn: when, where, how, and their opinion of elearning, and it’s done in an appealing, infographic-like style as well.
What intrigued me was the last section: are L&D teams tuned in to the learner voice? The results are telling. This section juxtaposes what the report heard from learners with what L&D reported in a previous study. Picking out just a few:
- 88% of staff like self-paced learning, but only 23% of L&D folks believe that learners have the necessary confidence
- 84% are willing to share with social media, but only 18% of L&D believe their staff know how
- 43% agree that mobile content is useful (or essential), but only 15% of L&D encourage mlearning
This is indicative of a big disconnect between L&D and the people they serve. This is why we need the revolution! There’s lots more interesting stuff in this report, so I strongly recommend you check it out.
How do we mLearn?
As preface, I used to teach interface design. My passion was still learning technology (and has been since I saw the connection as an undergraduate and designed my own major), but there are strong links between the two fields in terms of designing for humans. My PhD advisor was a guru of interface design, and the thought was “any student of his should be able to teach interface design”. And so it turned out. So interface design continues to be an interest of mine, and I recognize its importance. More so on mobile, where there are limitations on interface real estate, so more cleverness may be required.
Steven Hoober, who I had the pleasure of sharing a stage with at an eLearning Guild conference, is a notable UI design expert with a specialty in mobile. He had previously conducted a research project examining how people actually hold their phones, as opposed to relying on anecdotes. The Guild’s Research Director, Patti Shank, obviously thought this interesting enough to extend, because they’ve jointly published the results of the initial report along with subsequent research into tablets as well. And the results are important.
The biggest result, for me, is that people tend to use phones while standing and walking, and tablets while sitting. While you can hold a tablet with two hands and type, it’s hard. The point is to design for supported use with a tablet, but for handheld use with a phone. Which actually does imply different design principles.
I note that I still believe tablets to be mobile, as they can be used naturally while standing and walking, as opposed to laptops. Though you can support them, you don’t have to. (I’m not going to let the fact that there are special harnesses you can buy to hold tablets while you stand, for applications like medical facilities dissuade me, my mind’s made up so don’t confuse me :)
The report goes into more details, about just how people hold it in their hands (one handed w/ thumb, one hand holding, one hand touching, two hands with two thumbs, etc), and the proportion of each. This has impact on where on the screen you put information and interaction elements.
Another point is the importance of the center for information and the periphery for interaction, yet users are more accurate at the center, so you need to make your periphery targets larger and easier to hit. Seemingly obvious, but somehow obviousness doesn’t seem to hold in too much of design!
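As a back-of-the-envelope illustration of that guideline (the zone math and the sizes below are my own assumptions, not figures from the report):

```python
# Back-of-the-envelope illustration of center-vs-periphery target sizing.
# The sizes and the linear interpolation are assumptions, not report figures.
def min_target_size_mm(x_frac: float, y_frac: float) -> float:
    """Assumed minimum touch-target size for a screen position, where
    (x_frac, y_frac) are 0..1 fractions of screen width and height."""
    # Distance from screen center: 0 at center, ~0.71 at a corner.
    dist = ((x_frac - 0.5) ** 2 + (y_frac - 0.5) ** 2) ** 0.5
    CENTER_SIZE, EDGE_SIZE = 7.0, 12.0  # mm; assumed values
    # Users are more accurate at the center, so targets can be smaller there
    # and should grow toward the periphery.
    return CENTER_SIZE + (EDGE_SIZE - CENTER_SIZE) * min(dist / 0.71, 1.0)

print(round(min_target_size_mm(0.5, 0.5), 1))  # center -> 7.0
print(round(min_target_size_mm(0.0, 0.0), 1))  # corner -> ~12.0
```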
There is a wealth of other recommendations scattered throughout the report, with specifics for phones, small and large tablets, etc., as well as major takeaways. For example, the fact that tablets are often supported implies that font size needs more consideration than you’d expect!
The report is freely available on the Guild site in the Research Library (under the Content>Research menu). Just in time for mLearnCon!
Can we jumpstart new tech usage?
It’s a well-known phenomenon that new technologies get used in the same ways as old technologies until their new capabilities emerge. And this is understandable, if a little disappointing. The question is, can we do better? I’d certainly like to believe so! And a conversation on Twitter led me to try to make the case.
So, to start with, you have to understand the concept of affordances, at least at a simple level. The notion is that objects in the world support certain actions owing to the innate characteristics of the object (flat horizontal surfaces support placing things on them, levers afford pushing and pulling, etc). Similarly, interface objects can imply their capabilities (buttons for clicking, sliders for sliding). These can be conveyed by visual similarity to familiar real-world objects, or be completely new (e.g. a cursor).
One of the important distinctions is whether the affordance is ‘hidden’ or not. So, for instance, on iOS there can be meaningful differences between one-, two-, three-, and even four-fingered swipes. Unless someone tells you about them, however, or you discover them randomly (unlikely), you’re not likely to know they exist. And there are now so many that they’re hard to remember. There are many deep arguments about affordances, and they’re likely important, but they can seem like ‘angels dancing on the head of a pin’ arguments, so I’ll leave it at this.
The point here is that technologies have affordances. So, for example, email allows you to transmit text communications asynchronously to a set group of recipients. And the question is: can we anticipate and leverage those properties, and skip (or minimize) the stumbling beginnings?
Let me use an example. Remember the Virtual Worlds bubble? Around 2003, immersive learning environments were emerging (one of my former bosses went to work for a company). And around 2006-2009 they were quite the coming thing, and there was a lot of excitement that they were going to be the solution. Everyone would be using them to conduct business, and folks would work from desktops connecting to everyone else. Let me ask: where are they now?
The Gartner Hype Cycle talks about the ‘Peak of Inflated Expectations’ and then the ‘Trough of Disillusionment’, followed by the ‘Slope of Enlightenment’ until you reach the ‘Plateau of Productivity’ (such vibrant language!). And what I want to suggest is that the slope up is where we realize the real meaningful affordances that the technology provides.
So I tried to document the affordances and figure out what the core capabilities were. It seemed that Virtual Worlds really offered two main things: being inherently 3D and being social. Which are important components, no argument. On the other hand, they had two types of overhead: the cognitive load of learning them, and the technological load of supporting them. Which means that their natural niche would be where 3D would be inherently valuable (e.g. spatial models or settings, such as refineries where you want to track flows), and where social would also be critical (e.g. mentoring). Otherwise there were lower-cost ways to do either one alone.
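As a toy sketch of that kind of affordance-versus-need analysis (the technology entries and affordance labels are illustrative assumptions, not a vetted taxonomy):

```python
# Toy sketch: map needs to technologies by their affordances, preferring the
# option with the least unused (overhead-bearing) extras. Entries and labels
# are illustrative assumptions, not a vetted taxonomy.
TECH_AFFORDANCES = {
    "virtual world": {"3d", "social"},
    "3d model viewer": {"3d"},
    "discussion forum": {"social"},
}

def candidate_techs(needs: set[str]) -> list[str]:
    """Technologies whose affordances cover all the needs, least overhead first."""
    covering = [(len(affs - needs), tech)
                for tech, affs in TECH_AFFORDANCES.items()
                if needs <= affs]
    return [tech for _, tech in sorted(covering)]

print(candidate_techs({"3d", "social"}))  # -> ['virtual world']
print(candidate_techs({"social"}))        # -> ['discussion forum', 'virtual world']
```

The point isn’t the code, of course; it’s the discipline of naming the affordances and the overheads, and only picking the heavyweight option when you need its full combination.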
Thus, my prediction would be that those would be the types of applications that’d be seen after the bubble burst and we’d traversed the trough. And, as far as I know, I got it right. Similarly, with mobile, I tried to find the core opportunities. And this led to the models in the Designing mLearning book.
Of course, there’s a catch. I note that my understanding of the capabilities of tablets has evolved, for instance. Heck, if I could accurately predict all the capabilities and uses of a technology, I’d be running a venture capital fund. That said, I think that I can, and more importantly, we can, make a good initial stab. Sure, we’ll miss some things (I’m not sure I could’ve predicted the boon that Twitter has become), but I think we can do better than we have. That’s my claim, and I’m sticking to it (until proved wrong, at least ;).
