My mindmap of Sunday’s activities at Up To All Of Us.
UTAOU Saturday Mindmap
Designing the killer experience
I haven’t been in the position of having ultimate responsibility for driving a complete user experience for over a decade, though I’ve advised on many such projects. But I continue my decades-long fascination with design, to the extent that it’s a whole category for my posts! An article on how Apple’s iPhone was designed caused me to reflect.
On one such project, I asked early on: “who owns the vision?” The answer soon became clear: no one had complete ownership. Their model was to have a large-scope goal, from which various product managers took pieces, negotiating for budget, with vendors for resources, and with other team members for the capability to implement their features. And this has been a successful approach for many internet businesses: product managers owning their parts.
I compare that to the time, a decade ago, when I led a team developing a learning system. I laid out and justified a vision and gave them each parts, and while they took responsibility for their pieces of the interlocking whole, I was responsible for the overall experience.
Which is not to say I was by any means as visionary as Steve Jobs. In the article, he apparently told his iPhone team to start from the premise of creating “the first phone that people would fall in love with”. I like to think that I was working towards that, but I clearly hadn’t taken ownership of such a comprehensive vision, though we were working towards one as a team.
And we were a team. Everyone could offer opinions, and the project was better because of it. I did my best to make it safe for everyone’s voice to be heard. We met together weekly, I had everyone backing up someone else’s area of responsibility, and they worked together as much as they worked with me. In many ways, my role was to protect them from bureaucracy just as my boss’ role was to protect me from interference. And it worked: we got a working prototype up and running before the bubble burst.
(I remember one time, the AI architect and the software engineer came in asking me to resolve an issue. At the end of it I didn’t fully understand the issue, yet they thanked me profusely even though all three of us knew I hadn’t contributed anything but the space for them to articulate their two viewpoints. They left having found a resolution that I didn’t have to understand.)
And I don’t really know what the answer is, but my inclination is toward giving folks a vibrant goal and asking them to work together to make it so, rather than giving individuals tasks and letting them compete to succeed. I can see the virtues of Darwinian selection, but I have to believe, based upon things like Dan Pink’s Drive and my work with my colleagues in the Internet Time Alliance, that giving a team a noble goal, resourcing them, and giving them the freedom to pursue it is going to lead to a greater outcome. So, what do you think?
Reviewing elearning examples
I recently wrote about elearning garbage, and in case I doubted my assessment, today’s task made my dilemma quite clear. I was asked to be one of the judges for an elearning contest. Seven courses were identified as ‘finalists’, and my task was to review each and assign points in several categories. Only one was worthy of release, and only one other even made a passing grade. This is a problem.
Let me get the good news out of the way first. The winner (in my mind; the overall findings haven’t been tabulated yet) did a good job of immediately placing the learner in a context with a meaningful task. It was very compelling stuff, with very real examples and meaningful decisions. Real-world resources were to be used to accomplish the task (I cheated; I did it just from the information in the scenarios), and mistakes were guided towards the correct answer. There was enough variety in the situations faced to cover the real range of possibilities. If I were starting to put this information into practice in the real world, it might stick around.
On the other hand, there were the six other projects. When I look at my notes, there were some common problems. Not every problem showed up in every one, but all were seen again and again. Importantly, it could easily be argued that several were appropriately instructionally designed, in that they had clear objectives, and presented information and assessment on that information. Yet they were still unlikely to achieve any meaningfully different abilities. There’s more to instructional design than stipulating objectives and then dumping knowledge with an immediate test against those objectives.
The first problem is that most of the objectives were information objectives. There was no clear focus on doing anything meaningful, but instead on the ability to ‘know’ something. And while in some cases the learner might be able to pass the test (either because they could keep trying ’til they got it right, or because the alternatives to the right answer were mind-numbingly dumb; both leading to meaningless assessment), this information wasn’t going to stick. So we’ve really got two initial problems here: bad objectives and bad assessment.
In too many cases, also, there was no context for the information; no indication of how it connected to the real world. It was “here’s this information”. And, of course, one pass over a fairly large quantity, with the unreasonable and unrealistic expectation that it would stick. Again, two problems: lack of context and lack of chunking. And, of course, tests of random factoids that there was no particular reason to remember.
But wait, there’s more! In no case was there a conceptual model to tie the information to. Instead of an organizing framework, information was presented as essentially random collections. Not a good basis for any ability to regenerate the information. It’s as if they didn’t really care if the information actually stuck around after the learning experience.
Then, a myriad of individual little problems: bad audio in two, dull and dry writing pretty much across the board, and fixed timing that, of course, meant you were either waiting on the program or it wasn’t waiting for you. The graphics were largely amateurish.
And these were finalists! Some with important outcomes. We can’t let this continue, as people are frankly throwing money on the ground. This is a big indictment of our field, as it continues to be widespread. What will it take?
Will tablets diverge?
After my post trying to characterize the differences between tablets and mobile, Amit Garg similarly posted that tablets are different. He concludes that “a conscious decision should be made when designing tablet learning (t-learning) solutions”, and goes further to suggest that converting elearning or mlearning directly may not make the most sense. I agree.
As I’ve suggested, I think the tablet’s not the same as a mobile phone. It’s not always with you, and consequently it’s not ready at hand for every use. A real mobile device is useful for quick information bursts, not sustained attention to the device. (I’ll suggest that listening to audio, whether canned or a conversation, isn’t quite the same; there the mobile device is a vehicle, not the main source of interaction.) Tablets are for more sustained interactions, in general. While they can be used for quick interactions, the screen size supports more sustained ones.
So when do you use tablets? I believe they’re valuable for regular elearning, certainly, though you’d want to design for the touchscreen interface rather than mimic a mouse-driven interaction. Of course, I believe you also should not replicate the standard garbage elearning, but take advantage of the chance to rethink the learning experience, as Barbara Means suggested in the SRI report for the US Department of Education, which found that elearning was now superior to face-to-face instruction. It’s not because of the medium itself, but because of the chance to redesign the learning.
So I think that tablets like the iPad will be great elearning platforms. Unless the task is inherently desktop-bound, the intimacy of the touchscreen experience is likely to be superior. (Though, regarding Apple’s new market move, the books can be stunning, but they’re not a full learning experience.) But that’s not all.
Desktops, and even laptops, don’t have the portability of a tablet. I, and others, find that tablets are taken more places than laptops. Consequently, they’re available for use as performance support in more contexts than laptops (though not as many as smartphones or app phones). I think there’ll be a continuum of performance support opportunities, and constraints like quantity of information (I’d rather look at a diagram on a tablet), constraints of time and space in the performance context, as well as preexisting pressures for pods (smartphone or PDA) versus tablets, will determine the solution.
I do think there will be times when you can design performance support to run on both pads and pods, and times you can design elearning for both laptop and tablet (and tools will make that easier), but you’ll want to do a performance context analysis as well as your other analyses to determine what makes sense.
Stop creating, selling, and buying garbage!
I was thinking today (on my plod around the neighborhood) about how come we’re still seeing so much garbage elearning (and frankly, I had a stronger term in mind). And it occurred to me that there are multitudinous explanations, but it’s got to stop.
One of the causes is unenlightened designers. There are lots of them, for lots of reasons: converted trainers, lack of a degree, old-style instruction, myths, templates; the list goes on. You know, it’s not like one dreams of being an instructional designer as a kid. This is not to impugn their commitment, but even if they did have courses, they’d likely still not be exposed to much about the emotional side, for instance. Good learning design is not something you pick up in a one-week course, sadly. There are heuristics (Cathy Moore’s action mapping, Julie Dirksen’s new book), but the importance of the learning design isn’t understood and valued. And the pressures they face are overwhelming if they do try to change things.
Because their organizations largely view learning as a commodity. It’s seen as a nice-to-have, not as critical to the business. It’s about keeping the cost down, instead of looking at the value of improving the organization. I hear tell of managers telling the learning unit “just do that thing you do” to avoid a conversation about whether a course is actually the right solution, when the unit does try! They don’t know how to hire the talent they really need, it’s thin on the ground, and given that learning is seen as a commodity, they’re unlikely to be willing to really develop the necessary competencies (even if they knew what those were).
The vendors don’t help. They’ve optimized to develop courses cost-effectively, since that’s what the market wants. When they try to do what really works, they can’t compete on cost with those who are selling nice looking content, with mindless learning design. They’re in a commodity market, which means that they have to be efficiency oriented. Few can stake out the ground on learning outcomes, other than an Allen Interactions perhaps (and they’re considered ‘expensive’).
The tools are similarly focused on optimizing the efficiency of translating PDFs and PowerPoints into content with a quiz. It’s tarted up, but there’s little guidance for quality. When there is, it’s old school: you must have a Bloom’s objective, and you must match the assessment to the objective. That’s fine as far as it goes, but who’s pushing the objectives to line up with business goals? Who’s supporting aligning the story with the learner? That’s the designer’s job, but they’re not equipped. And tarted-up quiz show templates aren’t the answer.
Finally, the folks buying the learning are equally complicit. Again, they don’t know the important distinctions, so they’re told it’s soundly instructionally designed, and it looks professional, and they buy the cheapest that meets the criteria. But so much is coming from broken objectives, rote understanding of design, and other ways it can go off the rails, that most of it is a waste of money.
Frankly, the whole design part is commoditized. If you’re competing on the basis of hourly cost to design, you’re missing the point. Design is critical, and the differences between effective learning and clicky-clicky-bling-bling are subtle. Everyone accepts paying for technology development, but not for the learning design. And that’s wrong. Look, Apple’s products are fantastic technologically, but they earn their premium position through the quality of the experience, and that comes from the design. It’s the experience and outcome that matter, yet no one’s investing in learning on this basis.
It’s all understandable, of course (sort of like the situation with our schools), but it’s not tolerable. The costs are high: meaningless jobs, money spent for no impact; it’s just a waste. And that’s just for courses; how about the times the analysis isn’t done that might indicate some other approach? Courses cure all ills, right?
I’m not sure what the solution is, other than calling it out and trying to get a discussion going about what really matters and how to raise the game. Frankly, the great examples are all too few. As I’ve already pointed out in a previously referenced post, the awards really aren’t discriminating. I think folks like the eLearning Guild are doing a good job with their DevLearn showcase, but it’s finger-in-the-dike stuff.
Ok, I’m on a tear, and usually I’m a genial malcontent. But maybe it’s time to take off the diplomatic gloves, and start calling out garbage when we see it. I’m open to other ideas, but I reckon it’s time to do something.
Performance Architecture
I’ve been using the tag ‘learning experience design strategy’ as a way to think about not taking the same old approach of events über alles. The fact of the matter is that we’ve got quite a lot of models and resources to draw upon, and we need to rethink what we’re doing.
The problem is that it goes far beyond just a more enlightened instructional design, which of course we need. We need to think of content architectures, blends between formal and informal, contextual awareness, cross-platform delivery, and more. It involves technology systems, design processes, organizational change, and more. We also need to focus on the bigger picture.
Yet the vision driving this is, to me, truly inspiring: augmenting our performance in the moment and developing us over time in a seamless way, not in an idiosyncratic and unaligned way. And it is strategic, but I’m wondering if architecture doesn’t better capture the need for systems and processes as well as revised design.
This got triggered by an exercise I’m engaging in, thinking how to convey this. It’s something along the lines of:
The curriculum’s wrong:
- it’s not knowledge objectives, it’s skills
- it’s not current needs, it’s adapting to change
- it’s not about being smart, it’s about being wise
The pedagogy’s wrong:
- it’s not a flood, but a drip
- it’s not knowledge dump, it’s decision-making
- it’s not expert-mandated, it’s learner-engaging
- it’s not ‘away from work’, it’s in context
The performance model is wrong:
- it’s not all in the head, it’s distributed across tools and systems
- it’s not all facts and skill, it’s motivation and confidence
- it’s not independent, it’s socially developed
- it’s not about doing things right, it’s about doing the right thing
The evaluation is wrong:
- it’s not seat time, it’s business outcomes
- it’s not efficiency, at least until it’s effective
- it’s not norm-referenced, it’s criterion-referenced
So what does this look like in practice? I think it’s about a support system organized so that it recognizes what you’re trying to do, and provides possible help. On top of that, it’s about showing where the advice comes from, developing understanding as an additional light layer. Finally, on top of that, it’s about making performance visible and looking at the performance across the previous level, facilitating learning to learn. And, the underlying values are also made clear.
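To make the layering concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration (the function name, the dictionary keys, the strings); a real system would draw these from task models, content, and performance records, but it shows the shape of the three layers: help, the reasoning behind it, and visibility of performance over time.

```python
# Toy sketch of a layered support response. All names and strings are
# hypothetical placeholders, not an actual system's API.

def support_response(task, history):
    """Assemble a layered response for a recognized task."""
    return {
        # Layer 1: direct performance support for the task at hand.
        "help": f"Job aid and checklist for '{task}'",
        # Layer 2: a light conceptual layer showing where the advice comes from.
        "why": f"Why the '{task}' checklist looks the way it does",
        # Layer 3: meta-learning, making performance across tasks visible.
        "trend": f"You've tackled '{task}' {history.count(task)} time(s) before",
    }
```

The point of the structure is that each layer is optional for the performer: take the help and run, or peel back to the concepts, or step up to reflecting on your own trajectory.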
It doesn’t have to do all that right away. It can start with just better formal learning design and a bit of content granularity. It certainly starts with social media involvement, and with adapting the culture in the org to start developing meta-learning. But you want to have a vision of where you’re going.
And what does it take to get here? It needs a new design approach that starts from the performance gap and looks at root causes. The design process then considers what sort of experience would both achieve the end goal and close the gaps in the performer equation (including both technology aids and knowledge and skill upgrades), and considers how that develops over time, recognizing the capabilities of both humans and technology, with a value set that emphasizes letting humans do the interesting work. It’ll also take models of content, users, context, and goals, with a content architecture and a flexible delivery model, along with rich pictures of what a learning experience might look like and what learning resources could be. And an implementation process that is agile, iterative, and reflective, with contextualized evaluation. At least, that sounds right to me.
Now, what sounds right to you: learning experience design strategy, performance system design, performance architecture, <your choice here>?
Authentic Learning
This week, #change11 is being hosted by Jan Herrington (whom I had the pleasure of meeting in Western Australia many years ago; highly recommended). She’s talking about authentic learning, and has a nice matrix separating task type and setting to help characterize her approach. It’s an important component of making our learning more effective. On the way home from my evening yesterday, I wrote up some notes about a learning event I attended that seem perfectly appropriate in this context:
I had the pleasure of viewing some project presentations from Hult Business School, courtesy of Jeff Saperstein. It’s an interesting program; very international, and somewhat non-traditional.
In this situation, the students had been given a project by a major international firm to develop recommendations for their social business. I saw five of the teams present, and it was fascinating. I found out that they balanced the teams for diversity (students were very clearly from around the world including Europe, Asia, and Latin America), and they got some support in working together as well.
Overall, the presentations were quite well done. Despite some small variation in quality and one truly unusual approach to the problem, I was impressed with the coherence of the presentations and the solidity of the outcomes. Some were clearly innovative new ideas that would benefit the firm.
The process was good too; the firm had organized a visit to their local (world class) research center, and were available (through a limited process) for questions. A representative of the firm heard the presentations (through Skype!) and provided live feedback. He was very good, citing all the positives and asking a few questions.
Admittedly they lacked some experience, but when I think how I would’ve performed at that age, I really recognized the quality of the outcome.
This sort of grounded practice in addressing real questions in a structured manner is a great pedagogy. The students worked together on projects that were meaningful to them, both in being real and being interesting, and received meaningful feedback. You get valuable conceptual processing and meta-skills as well. The faculty told me afterward that many of these students had worked only in their home country prior to this, but after this diverse experience, they were truly globally ready.
How are you providing meaningful learning experiences?
Further (slow) thoughts on learning #change11
I’ve been monitoring the comments on my #change11 posts, and rather than address them individually, I’m posting responses. So, a couple of questions have recurred about the slow learning concept. One is how the notion of quick small bites reflects a slower learning process. Another is how it might play out in the organization. And a final one is about the overall pedagogy.
To address the first one, the notion is that the learnings are wrapped around the events in your life, rather than taking you away from the context of your life to have a learning experience. I think of this as embedded learning versus event learning. Yes, it’s quick bits, but they don’t mean as much on their own as in their cumulative effect over time. Whereas the cumulative effect of the event model dissipates quickly, the distributed model builds slowly!
To address the pedagogy, it’s about having little bits of extra information that connect to the events in your life, not separate ones (unless the events in your life aren’t frequent enough, in which case we might create little ‘alternate reality’ events that provide plausible and fun scenarios with the desired practice to develop you along the path). It’s not breaking event-based learning into smaller chunks so much as wrapping around the meaningful events in your life, when possible.
And that pedagogy will very much be our choice. I do hope we can take the opportunity to include a sufficient level of challenge, and the opportunity to personalize it, rather than keeping it generic: considering minimalist approaches, weaving in learning-to-learn, connecting people as well as providing additional information. For instance, we should be asking personalization questions afterward (whether via system or person). The algorithms hopefully will have some serendipity as well as connections to my personal experience, and some elliptical material. This would support discovering new relationships in learning as well, as we mine the effects of some random juxtapositions across many experiences.
How to make this practical in organizations worried about immediate productivity? In my experience, it’s already happening. Folks are (trying to) take responsibility for their learning. They take social media cigarette breaks to go out and connect to their networks when the office blocks access through the firewall. They’re discussing work topics in LinkedIn groups, and using Twitter both to track new things and to get questions answered. The question really is whether orgs will ignore or hamper this versus facilitating it. That’s why I’m part of the Internet Time Alliance, where we are working with organizations to help them start supporting learning, not just offering training.
We do see small moves toward slow learning, but I don’t like to assume everyone’s yet capable of taking ownership of it. And, yes, the sad state of the world is that typical schooling and old-style management can squelch the love of learning and fail to develop the skills that are needed. We have multiple challenges, and I’m just suggesting that the concept of slow learning, a drip-irrigation versus flood metaphor, is a wedge to help drive us out of the event-based model and start addressing the issues raised: pedagogy, curricula, infrastructure, technology, politics, and more. The effort to build such a system, I reckon, will force progress on many fronts. Whether it’s the best approach to do that is a separate question. I welcome your thoughts.
And a thanks to all for their participation this week, it’s been a learning experience for me as well!
Making Slow Learning Concrete #change11
It occurs to me that I’ve probably not conveyed in any concrete terms what I think the ‘slow learning’ experience might be like. And I admit that I’m assuming a technology environment in the concrete instance (because I like toys). So here are some instances:
Say you’ve a meeting with a potential client. You’ve been working on how to more clearly articulate the solutions you offer and listening to the customer to establish whether there’s a match or not. You’ve entered the meeting into your calendar, and indicated the topic by the calendar, tags, the client, or some other way. So, shortly before the meeting, your system might send you some reminder that both reiterated the ‘message’ you’d worked out, and reminded you about pulling out the client’s issues. Then, there might be a tool provided during the meeting (whether one you’d created, one you’d customized, or a stock one) to help capture the important elements. Afterwards, the system might provide you with a self-evaluation tool, or even connect you to a person for a chat.
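That first scenario can be sketched as a small piece of Python. The `Event` class, the tag names, and the reminder strings are all hypothetical stand-ins, not any real product’s API; the point is just the mechanism: shortly before a tagged calendar event, the system surfaces the nudges you’ve associated with that topic.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Event:
    """A calendar entry, tagged with topics you're working on."""
    title: str
    start: datetime
    tags: set = field(default_factory=set)

# Topic-keyed nudges: the 'message' you've worked out, plus a prompt.
# These strings are placeholders for illustration.
REMINDERS = {
    "client-meeting": [
        "Reiterate the message you've worked out about your solutions.",
        "Listen for the client's issues before pitching.",
    ],
}

def due_reminders(events, now, lead=timedelta(minutes=30)):
    """Return nudges for tagged events starting within the lead window."""
    nudges = []
    for ev in events:
        if now <= ev.start <= now + lead:
            for tag in ev.tags:
                nudges.extend(REMINDERS.get(tag, []))
    return nudges
```

In a real deployment the events would come from your actual calendar and the nudges from your development plan; the in-meeting capture tool and the post-meeting self-evaluation would hang off the same event record.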
Or, say, you’re walking around a new town. Your system might regularly suggest some topics of interest, depending on your interests: architecture, history, or socioeconomic indicators. You could ignore them, or follow them up. Ideally, it’d also start connecting some dots: showing a picture from a previous trip and suggesting “Remember we saw an example of <this> architecture here? Well, right here we have the evolution of that form; see how the arches have…” So it’s making connections for you. You can ignore it, pursue it further, or whatever. It might make up a tour for you on the fly, if you wanted. If you were interested in food, it might say: “We’ve been exploring Indian food; you apparently have no plans for dinner, and there’s an Indian restaurant near here that would be a way to explore Southern Indian cuisine”.
Another situation might be watching an event, and having extra information laid on top. So instead of just watching a game, you could see additional information that is being used by the coaches to make strategic decisions: strengths and weaknesses of the opposing team in this context, intangible considerations like clock management, or the effects of wind.
And even in formal schooling, if you’re engaged in either an individual or group problem, the system might well be available to provide a hint, as well as, of course, tools to hand.
The notion is that you might have more formal and informal goals, and the system would layer on information, augmenting your reality with extra information aligned to your interests and goals, making the world richer. It could and would help performance in the moment, but also layer on some concepts on top.
I see this as perhaps a mobile app that has some way of notifying you (e.g. its own signature ‘sound’), a way to sense context, and more. It might ask for your agreement to link into the task apps you use, so it has more context information, but also knows when and where you are.
This isn’t the only path to slow learning. Ideally, it’d just be a rich offering of community-generated resources and people to connect with in the moment, but to get people ready to take advantage of that it might need some initial scaffolding. Is this making sense?

