In a clever talk, Aaron Dignan used game theory to explore how to improve the workplace.
Barrier to scale?
I was part of a meeting about online learning for an institution, and something became clear to me. We were discussing MOOCs (naturally, isn’t everyone?), and the opportunities for delivering quality learning online. And that’s where I saw a conflict that suggested a fundamental barrier to scale.
When I think about quality learning, the core of it is, to me, about the learning activity or experience. And that means meaningful problems with challenge and relevance, more closely resembling those found in the real world than ones typically taught in schools and training. There’s more.
The xMOOCs that I’ve seen have a good focus on quality assessment aligned to the learning goal, but there’s a caveat. Their learning goals have largely been about cognitive skills, about how to ‘do’. And I’m a big fan of focusing on ‘do’, not know. But I recognize there’s more; there’s also ‘be’. That is, even if you have acquired skills in something like AI programming, that doesn’t mean you’re ready to be employed as an AI programmer. There’s much more. For instance, how to keep yourself up to date, how to work well with others, the nature of AI projects, etc.
It also came up that, when polled, a learned committee suggested the top things to learn were to lead, to work well on a team, to communicate, etc. These are almost never developed by working on abstract problems. In fact, I’d suggest that the best activities are meaningful, challenging, and collaborative. Social learning – working together to hear other viewpoints, negotiate a shared understanding, and create a unique response to the challenge – is arguably the most powerful way to learn.
Consequently, it occurs to me that you simply cannot make a quality learning experience that can be auto-assessed. It needs to be rich, mentored, scaffolded, and evaluated. Which means that you have real trouble scaling a quality learning experience. Even with peer assessment, there’s some need for human intervention with every group’s process and product, let alone generating the beneficial meta-learning aspects that could come from this.
So, while there is real value to be had from MOOCs, like perhaps developing some foundational knowledge and skills, ultimately a valuable education will have to incorporate some mechanism for meaningful activities that develop the desirable deep understanding. A tiered model, perhaps? This is still embryonic, but it seems to me that this is a necessary step on the way to a real education in a domain.
Leaving Trails
So I was away for the weekend at a retreat with like-minded souls, Up to All of Us, thinking deeply about the issues that concern us. I walked away with some new and renewed friendships, relaxed, and with a few new thoughts. Two memes stuck with me, and the first was “leaving trails”.
For context, the event featured designers – graphic, industrial, visual – but mostly learning designers. In a session on supporting the growth of design awareness, we were being led through an exercise on body-storming (using role plays to work through issues), and one of the elements that surfaced was posting your designs on walls in places where it’s hard to see others’ work. I had two reactions to this: the first was that the ability to share work is a culture issue, but the other was that it’s a transparency issue.
The point that I brought up was that just seeing the work wasn’t enough; ideally you’d want to understand the thinking behind it (not just working out loud, but thinking out loud). That can come from a conversation around the work, but that’s not always possible (particularly if it’s a virtual wall).
And I thought the leader of the exercise, an eloquent and experienced designer, said that you couldn’t really annotate your thoughts about the work. Which I fundamentally disagreed with, but he then went on to talk about showing interim work, specs, etc (and I’m filling in here with some inferences because memory’s not perfect).
What emerged in my thinking was the phrase leaving trails: not just your work, but the trajectories, constraints, and more. As I’ve argued before, I think showing the thinking behind decisions is going to be increasingly important at every level. At the workgroup level, individuals will be better able to collaborate if their (prior) work is detailed. Communities of practice similarly need such evidence. Another colleague also presented work on B Corps, benefit corporations, in which businesses move from shareholder returns to missions; such transparency will be necessary there as well as for eGovernment. I reckon, what with ClueTrain, any org that isn’t sufficiently transparent will lose trust.
Of course, the comfort level in sharing gets back to the culture issue: people have to be safe to share their work and give and receive feedback in constructive ways to move forward. Which is really the subject of the next meme.
(NB: one of the principles of the event is Chatham House Rule, which basically says you can’t share personal details without prior approval, and I didn’t ask, so the perpetrators and victims shall remain nameless.)
Norman’s Design of Future Things
Donald Norman’s book, The Design of Everyday Things, is a must-read for anyone who creates artifacts or interfaces for humans. This one continues in the same vein, but looks at the new technology that has emerged in the roughly 20 years since that book came out, and its implications. There are some interesting thoughts, though few hints for learning.
In the book, Don talks about how new technologies are increasingly smart, e.g. cars are almost self-driving (and since the book was published back in 2007, they’re now already on the cusp). As a consequence, we have to start thinking deeply about when and where to automate, having technologies make decisions, versus when we’re in the loop. And, in the latter case, when and how we’re kept alert (pilots lose attention trying to monitor an auto-pilot, even falling asleep).
The issue, he proposes, is that tenuous relationship between an aware partner and the human. He uses the relationship between a horse and rider as an example, talking about loose-rein control and close-rein control. Again, there are times the rider can be asleep (I recall a gent in an Irish pub bemoaning the passing of the days when “the horse knew the way home”).
He covers a range of data points, from existing circumstances as well as experiments in new approaches. These range from noise to crowd behavior. For noise, he looks at how the sounds mechanical things made were clues to their state and operation, and how we’re losing those clues as we increasingly make things quiet. Engineers are even building in noise as a feature when it has disappeared via technical sophistication. For crowd behavior, one example is how the removal of street signs in a couple of cities has reduced accidents.
At the end, he comes up with a set of design principles:
- Provide rich, complex, and natural signals
- Be predictable
- Provide a good conceptual model
- Make the output understandable
- Provide continual awareness, without annoyance
- Exploit natural mapping to make interaction understandable and effective
For learning, he talks about how robots that teach are one place where such animated and embodied avatars make sense, whereas in many situations they’re more challenging. He talks about how they don’t need much mobility, can speak, and can be endearing. Not to replace teachers, but to supplement them. Certainly we have the software capability, but we have to wonder for what sort of system it makes sense to invest in actual embodiment versus speaking from a mobile device or computer.
As an exercise, I looked at his design principles to see what might transfer over to the design of learning experiences. The main issue is that in learning, we want the learner facing problems, focusing on the task of creating a solution with overt cognitive awareness, as opposed to an elegant, almost unconscious, accomplishment of a goal. This suggests that rule 2, ‘be predictable’, might be good in non-critical areas of focus, but not in the main area. The rest seem appropriate for learning experiences as well.
This is a thoughtful book, weaving a number of elements together to capture a notion, not hammer home critical outcomes. As such, it is not for the casual designer, but for those looking to take their design to the ‘next level’, or consider the directions that will be coming, and how we might prepare people for them. Just as Don proposed that the interface design folks should be part of the product design team in The Invisible Computer, so too should the product support specialists, sales training team, and customer training designers be part of the design team going forward, as the considerations of what people will have to learn to use new systems are increasingly a concern in the design of systems, not just products.
Performance support-ing learning
In a post last week, I mentioned how Gloria Gery’s original vision of performance support was not only supposed to help you in the moment, it was also – at least in principle – supposed to develop you over time. And yet I have yet to see it. So what am I talking about?
Let’s use an example. I think of the typical GPS as one of the purest models of performance support: it knows where you’re trying to go (since you tell it), and it helps you every step of the way. It can even adapt if you make a mistake. It will get you there.
However, the GPS will tell you nothing about the rationale it’s using to choose your route, which can seem different than one you might have chosen on your own. Even if it offers you alternatives, or you specify preferences like ‘no toll roads’, the underlying reasoning isn’t clear. Yet this might be an opportunity for navigational learning (e.g. “this route has more lights, so we prefer the slightly longer one with fewer opportunities for stopping”).
Nor does it help you learn anything along the way: geography, political boundaries, even geology, although it could do any of these with only a thin veneer of extra work: “as we cross the river, we are also crossing the boundary between X county and Y; in 1643 the pressure between the two cities of X1 and Y1 jockeying for power led to this settlement that shared the water resource.”
It could go further, using this as an example of a broader phenomenon: “geographic features often serve as political boundaries, including mountains and rivers as well as oceans”. This latter would, in a sensible approach, only be used a few times (as the message, once known, could become annoying). And, ideally, you could choose what you wanted to learn about.
This isn’t limited to GPS; it could be used in any instance of guided performance. Sometimes you might not care (e.g. I suspect most users of Turbo Tax don’t want to know about the nuances of the tax code, they just want it done!), but if you want people to understand the reasoning as a boost to more expert performance – e.g. so they can start using that model to infer how to deal with things that fall outside the range of performance support – this is a missed opportunity.
The point is to have even our programs ‘thinking out loud’, both to help us learn and to serve as a check on validity. Sure, it should be possible to shut it off or customize it, but the processing going on provides an opportunity for learning to happen in new and meaningful ways. The more we can couple the concept to the context, the more we can create learning that will really stick. And that is, or should be, the real goal.
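To make the idea a bit more concrete, here’s a minimal sketch of what such a ‘thinking out loud’ layer on performance support might look like. All the names here (GuidanceStep, SupportSession, and the GPS-style example content) are hypothetical illustrations, not any real GPS or performance-support API: each guidance step can carry an optional rationale and an optional learning note, and the learner controls which layers they hear.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GuidanceStep:
    """One step of performance support, with optional learning layers."""
    instruction: str                     # what to do right now
    rationale: Optional[str] = None      # why this choice was made (thinking out loud)
    learning_note: Optional[str] = None  # broader concept this situation illustrates

@dataclass
class SupportSession:
    """Delivers guidance, honoring the learner's preferences for the extra layers."""
    show_rationale: bool = True
    show_learning_notes: bool = True
    _notes_seen: set = field(default_factory=set)

    def deliver(self, step: GuidanceStep) -> list[str]:
        messages = [step.instruction]
        if self.show_rationale and step.rationale:
            messages.append(f"(Why: {step.rationale})")
        # Surface a given concept only once, so the message doesn't become annoying
        if (self.show_learning_notes and step.learning_note
                and step.learning_note not in self._notes_seen):
            self._notes_seen.add(step.learning_note)
            messages.append(f"(Learn: {step.learning_note})")
        return messages

# Hypothetical usage: a GPS-style route step wrapped with rationale and a learning note
session = SupportSession(show_rationale=True, show_learning_notes=True)
step = GuidanceStep(
    instruction="Turn left onto River Road.",
    rationale="This route is slightly longer but has fewer traffic lights.",
    learning_note="Geographic features like rivers often double as political boundaries.",
)
for msg in session.deliver(step):
    print(msg)
```

The design choice to note is simply that the support and the learning layers are separable: the system can keep doing its job with both layers switched off, and the learner opts in to as much of the reasoning and context as they want.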
Roger Schank #eli3 Keynote Mindmap
Michael Moore #eli3 Keynote Mindmap
Steve Wozniak #eli3 Keynote Mindmap
The legendary Steve “The Woz” Wozniak was the opening keynote at the 3rd International Conference of e-Learning and Distance Learning. In a wide-ranging, engaging, and personal speech, Steve made a powerful plea for the value of the thoughtful learner and for intrinsic motivation, project-based learning, social learning, and self-paced learning.
Real mLearning
Too many times, at conference expos and in advertisements, it appears that folks are trying to say that courses on a tablet (or phone) are mlearning. On the contrary, I’ll suggest that courses on a phone or a tablet are elearning. So then, what is mlearning?
My argument is pretty simple: just putting courses on a different device doesn’t change what they are. If it’s a traditional course – page-turning with a knowledge test, a virtual classroom, or even a simulation – that’s only been made touch-enabled, it’s still just elearning. Even if you strip it down to work on a phone, minimizing text, how is it really, qualitatively different?
Now, if you start breaking it up into chunks, and distributing it over time, we’re in a bit of a grey area, but really, isn’t that just what we should be doing in elearning, too? Learning needs to be distributed, but this is still just a greater degree of convenience than doing the same on a laptop. It’s a quantitative shift, not tapping into the inherent nature of mobile.
So, when is it really mlearning? I want to suggest that mlearning – and here I’m talking about courses, not mobile performance support, mobile social, etc, which also could and should be considered mlearning or at least mperformance – is when you’re using the local context to support learning. That could be restated as when you are turning a performance situation into a learning situation, wrapping the performance context with resources and support to take a performance experience and turn it into a learning experience.
Most of our formal learning involves what IBM termed ‘work-apart’ learning, something that happens away from your regular job. And most training and online learning are just that, separated from work. We artificially create contexts that mimic the workplace in most of our learning. And there are occasionally good reasons to do that, like handling multiple people at once, or when failure in the real situation would be costly.
Now, however, when we can bring digital technology wherever we are, we can use our real work as the basis of the learning experience. We don’t need an artificial context, we’re already in a real one! We can provide concepts, examples, and feedback around real, contextualized practice. Or we can add a layer to performance support that educates, not just supports, as Gloria Gery had proposed (but which has yet to be seen).
And, if the work context is at the desktop, then mobile isn’t necessarily a sensible solution. However, in those increasingly common circumstances when we’re on a site visit, in a meeting, at an event, or generally away from our desks, mlearning as I’m construing it here makes sense.
I don’t want to discount the value of elearning on mobile devices, particularly on tablets (where I have argued that the intimacy may have uniquely beneficial impacts), but I do think we shouldn’t consider context-free courses on a small device to be anything other than just elearning. So the question I’m wrestling with is whether mlearning includes mobile performance support, informal learning, etc, or whether we want a separate term for that. But I kinda do want to keep mlearning from meaning just ‘courses on a phone (or tablet)’. What say you?
Living with Complexity
Don Norman (disclaimer, my PhD advisor and mentor) has had a string of important books, starting with his stellar Design of Everyday Things (tops my ‘recommended books’ list for designers). His latest, Living with Complexity, is not as landmark a book as that, but it has some very astute thinking to present.
The book, as the title implies, is largely about how complexity isn’t bad, it’s necessary, and the real issue is designing to manage it. We want powerful systems to accomplish meaningful goals, and he makes the case that this naturally requires complexity, either at the front end or at the back end. Complexity at the front end offers powerful choice, which we often want, at the cost of comprehensibility. Complexity at the back end can seem like magic, but offers more opportunity for things to go wrong catastrophically.
Good design is naturally the solution. He suggests that good design makes complexity usable, and bad design makes complexity frustrating. And he makes a strong point that it’s now about services.
He goes beyond product design in detailing how you really aren’t designing just a product, but an experience, and that it takes a system to create an experience. Using Apple’s iPod, he points out how simplifying the purchasing (backend: lining up publishers to allow downloading individual titles for a simple fee) and downloading music (instead of converting files and storing in special folders) made a device that could carry a lot of music in a small package.
He goes deeper into service design, using the example of waiting in lines (I now know why immigration at SFO can be so frustrating!). He finally gets to recommendations for improvement, including signifying (making affordances perceivable), checklists, and job aids (over courses). His focus is on tapping into how our minds work, and aligning tools with them. He covers both sides: what designers should do differently, and what ‘consumers’ can do. He also covers some of the mismatches between design and consumers, going beyond the design to the overall system.
Overall, while seemingly not as well structured as his previous books, this one offers some advanced thinking on design that will benefit those looking to take a bigger-picture view. It feels more like a collection than a coherent narrative, but the elements are related and there are important insights in each section. Recommended for the advanced designer.