George Siemens kicked off the EDGEX conference with a broad-reaching and insightful review of the changes in higher education.
Reimagining Learning
On the way to the recent Up To All Of Us unconference (#utaou), I hadn’t planned a personal agenda. However, I was going through the diagrams that I’d created on my iPad, and discovered one that I’d frankly forgotten. Which was nice, because it allowed me to review it with fresh eyes, and it resonated. And I decided to put it out at the event to get feedback. Let me talk you through it, because I welcome your feedback too.
Up front, let me state at least part of the motivation. I’m trying to capture a rethinking of education, or formal learning. I’m tired of anything that lets folks think a knowledge dump and test is going to lead to meaningful change. I’m also trying to ‘think out loud’ for myself, and start getting more concrete about learning experience design.
Let me start with the second row from the top. I want to start thinking about a learning experience as a series of activities, not a progression of content. These can be a rich suite of things: engagement with a simulation, a group project, a museum visit, an interview, anything you might choose for an individual to engage in to further their learning. And, yes, they can include traditional things, e.g. reading this chapter.
This, by the way, has a direct relation to Project Tin Can, a proposal to supersede SCORM, allowing a greater variety of activities: Actor – Verb – Object, or I – did – this. (For all I can recall, the origin of the diagram may have been an attempt to place Tin Can in a broad context!)
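To make the statement idea concrete, here’s a minimal sketch of what an Actor – Verb – Object record might look like, loosely in the spirit of the Tin Can proposal; the field names and values are my own illustration, not the actual specification:

```python
import json

# A sketch of an Actor - Verb - Object statement, in the spirit of Tin Can's
# "I - did - this". Field names and values here are illustrative only,
# not the actual specification.
statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.com"},
    "verb": {"id": "http://example.com/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/wind-farm-simulation",
               "definition": {"name": {"en-US": "Wind farm siting simulation"}}},
    # A product or reflection from the activity could ride along as a result.
    "result": {"response": "Chose the coastal site; reflection attached."},
}

print(json.dumps(statement, indent=2))
```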
Around these activities, there are a couple of things. For one, content is accessed on the basis of the activities, not the other way around. Also, the activities produce products, as well as reflections.
For the activities to be maximally valuable, they should produce output. A sim use could produce a track of the learner’s exploration. A group project could provide a documented solution, or a concept-expression video or performance. An interview could produce an audio recording. These products are portfolio items, going forward, and assessable items. The assessment could be self, peer, or mentor.
However, in the context of ‘make your thinking visible’ (aka ‘show your work’), there should also be reflections or cognitive annotations. The underlying thinking needs to be visible for inspection. This is also part of your portfolio, and assessable. This is also where there’s the opportunity to really recognize where the learner is, or is not, getting the content, and to detect opportunities for assistance.
The learner is driven to content resources (audios, videos, documents, etc.) by meaningful activity. This is in opposition to the notion that the content dump happens before meaningful action. However, prior activities can ensure that learners are prepared to engage in the new activities.
The content could be pre-chosen, or the learners could be scaffolded in choosing appropriate materials. The latter is an opportunity for meta-learning. Similarly, the choice of product could be determined, or up to learner/group choice, again an opportunity for learning cross-project skills. Helping learners create useful reflections is valuable (I recall guiding honours students to take credit for the work they’d done; they were blind to much of their own hard work!).
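For the structurally minded, here’s one way the pieces above could hang together as a record: the activity, its product, the reflection, and assessments from self, peer, or mentor. This is just a sketch of my framing, not any existing standard:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Assessment:
    assessor: str                  # "self", "peer", or "mentor"
    rubric_scores: Dict[str, str]  # criterion -> level, e.g. {"analysis": "proficient"}
    comments: str = ""

@dataclass
class PortfolioItem:
    activity: str                  # e.g. "group project: concept-expression video"
    product: str                   # the output: a link, file, or description
    reflection: str                # the learner's thinking made visible
    assessments: List[Assessment] = field(default_factory=list)

# Hypothetical example of an activity captured as a portfolio item.
item = PortfolioItem(
    activity="museum visit interview",
    product="interview-recording.mp3",
    reflection="I chose open-ended questions because...",
)
item.assessments.append(
    Assessment("mentor", {"question design": "developing"},
               "Good openers; probe with follow-ups next time."))
```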
When I presented this to the groups, there were several questions asked via post-its on the picture I hand-drew. Let me address them here:
What scale are you thinking about?
This unpacks into several issues. What goes into activity design is a whole separate area, and learning experience design may well play a role beneath this level. However, the granularity of the activities is at issue. I think about this at several scales, from an individual lesson plan to a full curriculum. The choice of evaluation should be competency-based, assessed by rubrics, even jointly designed ones. There is a lot of depth linked to this.
How does this differ from a traditional performance-based learning model?
I hadn’t heard of performance-based learning. Looking it up, there seems to be considerable overlap, also with outcome-based learning, problem-based learning, and service learning, and similarly Understanding By Design. It may not be more; I haven’t yet done the side-by-side comparison. It’s scaling it up, and arguably a different lens, and maybe more, or not. Still, I’m trying to carry it to more places, and help provide ways to think anew about instruction and formal education.
An interesting aside, for me, is that this does segue to informal learning. That is, you, as an adult, choose certain activities to continue to develop your ability in certain areas. Taking this framework provides a reference for learners to take control of their own learning, and develop their ability to be better learners. Or so I would think, if done right. Imagine the right side of the diagram moving from mentor to learner control.
How much is algorithmic?
That really depends. Let me answer that in conjunction with this other comment:
Make a convert of this type of process out of a non-tech traditional process and tell that story…
I can’t do that now, but one of the attendees suggested this sounded a lot like what she did in traditional design education. The point is that this framework is independent of technology. You could be assigning studio and classroom and community projects, and getting back write-ups, performances, and more. No digital tech involved.
There are definite ways in which technology can assist: providing tools for content search, and product and reflection generation, but this is not about technology. You could be algorithmic in choosing from a suite of activities by a set of rules governing recommendations based upon learner performance, content available, etc. You could also be algorithmic in programming some feedback around tech-traversal. But that’s definitely not where I’m going right now.
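That said, if you did want to go algorithmic, the simplest version is a handful of rules over a pool of activities. Here’s a toy sketch in that spirit; the rules, field names, and data are invented purely for illustration:

```python
def recommend_next(activities, learner):
    """Pick the next activity from a pool using a few simple rules.

    Both the rules and the field names are invented for illustration.
    """
    # Only activities not yet completed and whose prerequisite (if any) is done.
    candidates = [a for a in activities
                  if a["name"] not in learner["completed"]
                  and (a["prereq"] is None or a["prereq"] in learner["completed"])]
    # Rule 1: if recent reflections flagged confusion, prefer scaffolded activities.
    if learner.get("needs_support"):
        candidates = [a for a in candidates if a["scaffolded"]] or candidates
    # Rule 2: otherwise favor the learner's least-practiced competency.
    weakest = min(learner["competencies"], key=learner["competencies"].get)
    preferred = [a for a in candidates if a["competency"] == weakest]
    return (preferred or candidates)[0] if candidates else None

activities = [
    {"name": "wind-farm sim", "competency": "analysis",
     "prereq": None, "scaffolded": True},
    {"name": "group design project", "competency": "collaboration",
     "prereq": "wind-farm sim", "scaffolded": False},
]
learner = {"completed": ["wind-farm sim"], "needs_support": False,
           "competencies": {"analysis": 3, "collaboration": 1}}

print(recommend_next(activities, learner))  # -> the group design project
```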
Similarly, I’m going to answer two other questions together:
How can I look at the path others take? and How can I see how I am doing?
The portfolio is really the answer. You should be getting feedback on your products, and seeing others’ feedback (within limits). This is definitely not intended to be individual; instead, hopefully it could be in a group, or at least some of the activities would be (e.g. commenting on blog posts, participating in a discussion forum, etc). In a tech-mediated environment, you could see others’ (anonymized) paths, access your feedback, and see traces of others’ trajectories.
The real question is: is this formulation useful? Does it give you a new and useful way of thinking about designing learning, and supporting learning?
70:20:10 Tech
At the recent Up To All Of Us event (#utaou), someone asked about the 70:20:10 model. As you might expect, I mentioned that it’s a framework for thinking about supporting people at work, but it also occurred to me that there might be a reason folks have not addressed the 90: in the past, there was little that they could do.
In the past, beyond the courses themselves, there was little that could be done except providing courses on how to coach, and making job aids. The technology wasn’t advanced enough. But that’s changed.
Several things have changed. One is the rise of social networking tools: blogs, micro-blogs, wikis, and more. The other is the rise of mobile. Together, these let us support the 90 in fairly rich ways.
For the 20, coaching and mentoring, we can start delivering that wherever needed, via mobile. Learners can ask for, or even be provided, support more closely tied to their performance situations regardless of location. We can also have a richer suite of coaching and mentoring happening through Communities of Practice, where anyone can be a coach or mentor, and be developed in those roles, too. Learner activity can be tracked, as well, leaving traces for later review.
For the 70, we can first of all start providing rich job aids wherever and whenever needed, including a suite of troubleshooting information and even interactive wizards. We can also have help on tap, freed of barriers of time and distance. We can look up information as well, if our portals are well-designed. And we can find people to help, whether with information or collaboration.
The point is that we no longer have limits in the support we can provide, so we should stop having limits in the help we *do* provide.
Yes, other reasons could be that folks in the L&D unit know how to do courses, so that’s their hammer making everything look like a nail, or that they don’t see it as their responsibility (to which I respond “Who else? Are you going to leave it to IT? Operations?”). That *has* to change. We can, and should, do more. Are you?
Making it visible and viral
On a recent client engagement, the issue was spreading an important initiative through the organization. The challenges were numerous: getting consistent uptake across management and leadership, aligning across organizational units, and making the initiative seem important and yet also doable in a concrete way. Pockets of success were seen, and these are of interest.
For one, one particular unit had focused on making the initiative viral, and consequently had selected and trained appropriate representatives dispersed through the organization. These individuals were supported and empowered to incite change wherever appropriate. And they were seeing initial signs of success. The lesson here is that top-down is not always sufficient, and that benevolent infiltration is a valuable addition.
The other involvement was also social, in that the approach was to make the outcomes of the initiative visible. In addition to mantras, graphs showing current status were placed in prominent places. Further, suggestions for improvement were not only solicited, but made visible and their status tracked. Again, indicators were positive on these moves.
The point is that change is hard, and a variety of mechanisms may be appropriate. You need to understand not just what formal mechanisms you have, but also how people actually work. I think that too often, planning fails to anticipate the effects of inertia, ambivalence, and apathy. More emotional emphasis is needed, more direct connection to individual outcomes, and more digestion into manageable chunks. This is true for elearning, learning, and change.
In looking at attitude change, and from experience, I recognize that even if folks are committed to change, it can be easy to fall back into old habits without ongoing support. Confusion in message, lack of emotional appeal, and idiosyncratic leadership only reduce the likelihood. If it’s important, get alignment and sweat the details. If it’s not, why bother?
Social media budget line item?
Where does social media fit in the organization? In talking with a social media entrepreneur over beers the other day, he mentioned that one of his barriers in dealing with organizations was that they didn’t have a budget line for social media software.
That may sound trivial, but it’s actually a real issue in terms of freeing up the organization. In one instance, it had been the R&D organization that undertook the cost. In another case, the cost was attributed to the overhead incurred in dealing with a merger. These are expedient, but wrong.
It’s increasingly obvious that it’s more than just a ‘nice to have’. As I’ve mentioned previously, innovation is the only true differentiator. If that’s the case, then social media is critical. Why? Because the myth of individual innovation is busted, as clearly told by folks like Keith Sawyer and Steven Berlin Johnson. So, if it’s not individual, it’s social, and that means we need to facilitate conversations.
If we want people to be able to work together to create new innovations, we don’t want to leave it to chance. In addition to useful architectural efforts that facilitate in person interactions, we want to put in place the mechanisms to interact without barriers of time or distance. Which means, we need a social media system.
It’s pretty clear that if you align things appropriately (culture, vision, tools), you get better outcomes. And, of course, culture isn’t a line item, and vision’s a leadership mandate. But tools, well, they are a product/service, and need resources.
Which brings us to the initial point: where does this responsibility lie? Despite my desire for it to sit with the folks most likely to understand facilitating learning (though that’s sadly unlikely in too many L&D departments), it could be IT, operations, or, as mentioned above, R&D. The point is, this is arguably one of the most important investments in the organization, and typically not one of the most expensive (making it the best deal going!). Yet there’s no unified, obvious home!
There are worries if it’s IT. They are, or should be, great at maintaining network uptime, but don’t really understand learning. Nor do the other groups, and yet facilitating the discussion in the network is the most important external role. But who funds it?
Let’s be real; no one wants to have to own the cost when there’re other things they’re already doing. But I’d argue that it’s the best investment an L&D organization could make, as it will likely have the biggest impact on the organization. Well, if you really are looking to move needles on key business metrics. So, where do you think it could, and should reside?
Sharing Failure
I’ve earlier talked about the importance of failure in learning, and now it’s revealed that Apple’s leadership development program plays that up in a big way. There are risks in sharing, and rewards. And ways to do it better and worse.
In an article on Macrumors (obviously, an Apple info site), they detail part of Adam Lashinsky’s new Inside Apple book that reports on Apple’s executive development program. Steve Jobs hired a couple of biz school heavyweights to develop the program, and apparently “Wherever possible the cases shine a light on mishaps…”. They use examples from other companies, and importantly, Apple’s own missteps.
Companies that can’t learn from mistakes, their own and others’, are doomed to repeat them. In organizations where it’s not safe to share failures, where anything you say can and will be held against you, the same mistakes will keep getting made. I’ve worked with firms that have very smart people, but their culture is so aggressive that they can’t admit errors. As a consequence, the company continues to make them, and gets in its own way. You don’t want to celebrate failure, but you do want to tolerate it. What can you do?
I’ve heard a great solution. Many years ago now, at the event that led to Conner & Clawson’s Creating a Learning Culture, one small company shared their approach: they ring a bell not when the mistake is made, but when the lesson’s learned. They’re celebrating – and, importantly, sharing – the learning from the event. This is a beautiful idea, and a powerful opportunity to use social media when the message goes beyond a proximal group.
There’s a lot that goes on behind this, particularly in terms of having a culture where it’s safe to make mistakes (culture eats strategy for breakfast, as the saying goes). What is a problem is making the same mistake, or dumb mistakes. How do you prevent the latter? By sharing your thinking, or thinking out loud, as you develop your planned steps.
Now, just getting people sharing isn’t necessarily sufficient. Just yesterday (as I write), Jane Bozarth pointed me towards an article in the New Yorker (at least the abstract thereof) that argues why brainstorming doesn’t work. I’ve said many times that the old adage “the room is smarter than the smartest person in the room” needs a caveat: if you manage the process right. There are empirical results that distinguish what works from what doesn’t, such as: having everyone think on their own first, then share; focusing initially on divergence before convergence; making a culture where it’s safe, even encouraged, to have a diversity of viewpoints; etc.
No one says getting a collaborating community going is easy, but like anything else, there are ways to do it, and do it right. And here too, you can learn from the mistakes of others…
Will tablets diverge?
After my post trying to characterize the differences between tablets and mobile, Amit Garg similarly posted that tablets are different. He concludes that “a conscious decision should be made when designing tablet learning (t-learning) solutions”, and goes further to suggest that converting elearning or mlearning directly may not make the most sense. I agree.
As I’ve suggested, I think the tablet’s not the same as a mobile phone. It’s not always with you, and consequently it’s not ready for use at any moment. A real mobile device is useful for quick information bursts, not sustained attention to the device. (I’ll suggest that listening to audio, whether canned or a conversation, isn’t quite the same; the mobile device is a vehicle, not the main source of interaction.) Tablets, in general, are for more sustained interactions. While they can be used for quick lookups, the screen size supports longer engagement.
So when do you use tablets? I believe they’re valuable for regular elearning, certainly, though you would want to design for the touchscreen interface rather than mimic a mouse-driven interaction. Of course, I believe you also should not replicate the standard garbage elearning, but take advantage of the chance to rethink the learning experience, as Barbara Means suggested in the SRI report for the US Department of Education that found eLearning was now superior to F2F. It’s not because of the medium itself, but because of the chance to redesign the learning.
So I think that tablets like the iPad will be great elearning platforms. Unless the task is inherently desktop, the intimacy of the touchscreen experience is likely to be superior. (Though, even with Apple’s new market move, the books can be stunning, but they’re not a full learning experience.) But that’s not all.
Desktops, and even laptops, don’t have the portability of a tablet. I, and others, find that tablets are taken more places than laptops. Consequently, they’re available for use as performance support in more contexts than laptops (though not as many as smart or app phones). I think there’ll be a continuum of performance support opportunities, and constraints like the quantity of information (I’d rather look at a diagram on a tablet), constraints of time and space in the performance context, as well as preexisting pressures for pods (smartphone or PDA) versus tablets, will determine the solution.
I do think there will be times when you can design performance support to run on both pads and pods, and times you can design elearning for both laptop and tablet (and tools will make that easier), but you’ll want to do a performance context analysis as well as your other analyses to determine what makes sense.
Changing the Book game
I was boarding a plane away from home as Apple’s announcement was happening, so I haven’t had the chance to dig into the details as I normally would, but just the news itself shows Apple is taking on yet another industry. What Apple did to the music industry is a closer analogy to what is happening here than what they did to the phone industry, however.
As Apple recreated the business of music publishing, they’re similarly shifting textbook publishing. They’ve set a price cap (ok, perhaps just for high school, to begin), and a richer target product. In this case, however, they’re not revolutionizing the hardware, but the user experience, as their standard has a richer form of interaction (embedded quizzes) than the latest ePub standard they’re building upon. This is a first step towards the standard I’ve argued for, with rich embedded interactivity (read sims/games).
Apple has also democratized the book creation business, with authoring tools for anyone. They have kind of done that with GarageBand, but this is easier. Publishers will have the edge on homebrew for now, with a greater infrastructure to accommodate different state standards, and media production capabilities or relationships. That may change, however.
Overall, it will be interesting to see how this plays out. Apple, once again making life fun.
Stop creating, selling, and buying garbage!
I was thinking today (on my plod around the neighborhood) about how come we’re still seeing so much garbage elearning (and frankly, I had a stronger term in mind). And it occurred to me that there are multitudinous explanations, but it’s got to stop.
One of the causes is unenlightened designers. There are lots of them, for lots of reasons: converted trainers, lack of a degree, old-style instruction, myths, templates, the list goes on. You know, it’s not like one dreams of being an instructional designer as a kid. This is not to question their commitment, but even if they did have courses, they’d likely still not be exposed to much about the emotional side, for instance. Good learning design is not something you pick up in a one-week course, sadly. There are heuristics (Cathy Moore’s action mapping, Julie Dirksen’s new book), but the importance of the learning design isn’t understood and valued. And the pressures they face are overwhelming if they do try to change things.
Because their organizations largely view learning as a commodity. It’s seen as a nice to have, not as critical to the business. It’s about keeping the cost down, instead of looking at the value of improving the organization. I hear tell of managers telling the learning unit “just do that thing you do” to avoid a conversation about actually looking at whether a course is the right solution, when they do try! They don’t know how to hire the talent they really need, it’s thin on the ground, and given it’s a commodity, they’re unlikely to be willing to really develop the necessary competencies (even if they knew what they are).
The vendors don’t help. They’ve optimized to develop courses cost-effectively, since that’s what the market wants. When they try to do what really works, they can’t compete on cost with those who are selling nice looking content, with mindless learning design. They’re in a commodity market, which means that they have to be efficiency oriented. Few can stake out the ground on learning outcomes, other than an Allen Interactions perhaps (and they’re considered ‘expensive’).
The tools are similarly focused on optimizing the efficiency of translating PDFs and PowerPoints into content with a quiz. It’s tarted up, but there’s little guidance for quality. When there is, it’s old school: you must have a Bloom’s objective, and you must match the assessment to the objective. That’s fine as far as it goes, but who’s pushing the objectives to line up with business goals? Who’s supporting aligning the story with the learner? That’s the designer’s job, but they’re not equipped. And tarted-up quiz-show templates aren’t the answer.
Finally, the folks buying the learning are equally complicit. Again, they don’t know the important distinctions, so they’re told it’s soundly instructionally designed, and it looks professional, and they buy the cheapest that meets the criteria. But so much is coming from broken objectives, rote understanding of design, and other ways it can go off the rails, that most of it is a waste of money.
Frankly, the whole design part is commoditized. If you’re competing on the basis of hourly cost to design, you’re missing the point. Design is critical, and the differences between effective learning and clicky-clicky-bling-bling are subtle. Everyone accepts paying for technology development, but not for the learning design. And that’s wrong. Look, Apple’s products are fantastic technologically, but they command a premium through the quality of the experience, and that comes from the design. It’s the experience and outcome that matter, yet no one’s investing in learning on this basis.
It’s all understandable, of course (sort of like the situation with our schools), but it’s not tolerable. The costs are high: meaningless jobs, money spent for no impact; it’s just a waste. And that’s just for courses; how about the times the analysis isn’t done that might indicate some other approach? Courses cure all ills, right?
I’m not sure what the solution is, other than calling it out, and trying to get a discussion going about what really matters, and how to raise the game. Frankly, the great examples are all too few. As I’ve already pointed out in a previously referenced post, the awards really aren’t discriminating. I think folks like the eLearning Guild are doing a good job with their DevLearn showcase, but it’s finger-in-the-dike stuff.
Ok, I’m on a tear, and usually I’m a genial malcontent. But maybe it’s time to take off the diplomatic gloves, and start calling out garbage when we see it. I’m open to other ideas, but I reckon it’s time to do something.
Level of ‘levels’
I was defending Kirkpatrick’s levels the other day, and after being excoriated by my ITA colleagues, I realized there was not only a discrepancy between principle and practice, but also between my interpretation and the model as it’s espoused. Perhaps I’ve been too generous.
The general idea is that there are several levels at which you can evaluate interventions:
- whether the recipient considered the intervention appropriate or not
- whether the recipient can demonstrate new ability after the intervention
- whether the intervention is being applied in the workplace, and
- whether the intervention is impacting desired outcomes.
That this is my interpretation became abundantly clear. But let’s start with what’s wrong in practice.
In practice, first, folks seem to think that just doing level 1 (‘smile sheets’) is enough. Far fewer people take the next logical step and assess level 2. When they do, it’s too often a knowledge test. Both of these fail to understand the intention: Kirkpatrick (rightly) said you have to start at level 4. You have to care about a business outcome you’re trying to achieve, and then work backwards: what performance change in the workplace would lead to the desired outcome. Then, you can design a program to equip people to perform appropriately and determine whether they can, and finally see if they like it. And, frankly, level 1 is useless until you finally have had the desired impact, and then care to ensure a desirable user experience. As a standalone metric, it ranks right up there with measuring learning effectiveness by the pound of learners served.
Now, one of the things my colleagues pointed out to me, beyond the failure in implementation, is that Kirkpatrick assumes that it has to be a course. If it’s just misused, I can’t lay blame, but my colleagues proceeded to quote chapter and verse from the Kirkpatrick site to document that the Kirkpatricks do think courses are the solution. Consequently, any mention of Kirkpatrick only reinforces the notion that courses are the salve to all ills.
Which I agree is a mindset all too prevalent, and so we have to be careful of any support that could lead to a regression to the status quo. Courses are fine when you’ve determined that a skill gap is the problem. And then, applying Kirkpatrick starting with Level 4 is appropriate. However, that’s more like 15% of the time, not 100%.
So where did I go wrong? As usual, when I look at models, I abstract to a useful level (my PhD focused on this, and Felice Ohrlich did an interesting study that pointed out how the right level of abstraction is critical). So, I didn’t see it as tied to courses; I saw that it could in principle be used for performance support as well (at least levels 3 and 4), and also for some social learning interventions.
Moreover, I was hoping that by starting at level 4, you’d look to the outcome you need, and be more likely to look at other solutions as well as courses. But I had neglected to note the pragmatic issue that the Kirkpatricks imply courses are the only workplace intervention to move the needles, and that’s not good. So, from now on I’ll have to be careful in my references to Kirkpatrick.
The model of assessing the change needed and working backward is worthwhile, as is doing so systematically. Consequently, at an appropriate level of abstraction, the model’s useful. However, in its current incarnation it carries too much baggage to be recommended without a large amount of qualification.
So I’ll stick to talking about impacting the business, and determining how we might accomplish that, rather than talk about levels, unless I fully qualify it.