Connie Yowell gave a passionate and informative presentation on the driving forces behind digital badges.
Looking forward on content
At DevLearn next week, I'll be talking about content systems in session 109. The point is that instead of monolithic content, we want to start getting more granular, for more flexible delivery. While there I'll talk about some of the options for how; here I want to make the case for why, in a simplified way.
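To give a flavor of what 'more granular' can mean, here's a minimal sketch (purely mine, not what I'll present; all the names and fields are invented for illustration): content as tagged chunks that get assembled per need, rather than shipped as one monolith.

```python
from dataclasses import dataclass

# A minimal sketch of granular content: each chunk is tagged by role and
# context, so delivery can pull just what's needed rather than a monolith.
# All names and fields here are illustrative assumptions, not a standard.

@dataclass
class ContentChunk:
    chunk_id: str
    role: str      # e.g., 'concept', 'example', 'practice'
    context: str   # e.g., 'sales-call', 'support-ticket'
    body: str      # the actual content

def assemble(chunks, role=None, context=None):
    """Select only the chunks matching a given delivery need."""
    return [c for c in chunks
            if (role is None or c.role == role)
            and (context is None or c.context == context)]

library = [
    ContentChunk("c1", "concept", "sales-call", "Why open questions matter..."),
    ContentChunk("c2", "example", "sales-call", "A rep opens with..."),
    ContentChunk("c3", "practice", "support-ticket", "A customer writes in..."),
]

# e.g., deliver only the practice items for the support context
print(assemble(library, role="practice", context="support-ticket"))
```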
As an experiment (gotta keep pushing the envelope in a myriad of ways), I’ve created a video, and I want to see if I can embed it. Fingers crossed. Your feedback welcome, as always.
Agile?
Last Friday's #GuildChat was on Agile Development. The topic is interesting to me because, as with Design Thinking, it seems like well-known practices under a new branding. So, as I did then, I'll lay out what I see and hope others will enlighten me.
As context, during grad school I was in a research group focused on user-centered system design, which included design, processes, and more. I subsequently taught interface design (aka Human Computer Interaction, or HCI) for a number of years (while continuing to research learning technology), and made a practice of advocating the best practices from HCI to the ed tech community. What was current at the time were iterative, situated, collaborative, and participatory design processes, so I was pretty familiar with the principles, and a fan. That is: really understand the context, design and test frequently, and work in teams with your customers.
Fast forward a couple of decades, and the Agile Manifesto puts a stake in the ground for software engineering. And we see a focus on releasable code, but again with principles of iteration and testing, teamwork, and tight customer involvement. Michael Allen was enthused enough to use it as a spark that led to the Serious eLearning Manifesto.
That inspiration has clearly (and finally) now moved to learning design. Whether it’s Allen’s SAM or Ger Driesen’s Agile Learning Manifesto, we’re seeing a call for rethinking the old waterfall model of design. And this is a good thing (only decades late ;). Certainly we know that working together is better than working alone (if you manage the process right ;), so the collaboration part is a win.
And we certainly need change. The existing approaches we too often see involve a designer being given some documents, access to a SME (if lucky), and told to create a course on X. Sure, there're tools and templates, but they are focused on making particular interactions easier, not on ensuring better learning design. And the person works alone, doing the design and development in one pass. There are likely to be review checkpoints, but there's little testing. There are variations on this, including perhaps an initial collaboration meeting, some SME review, or a storyboard before development commences, but too often it's largely an independent one-way flow, and this isn't good.
The underlying issue is that waterfall models, where you specify the requirements in advance and then design, develop, and implement, just don't work. The problem is that the human brain is pretty much the most complex thing in existence, and when we determine a priori what will work, we don't account for the fact that, Heisenberg-like, what we implement will change the system. Iterative development and testing allow the specs to change after initial experience. Several issues arise with this, however.
For one, there's a question about the right size and scope of a deliverable. Learning experiences, while typically overwritten, do have some stricture that keeps them from having intermediately useful results. I was curious about what made sense; to me it seemed that you could develop your final practice first as a deliverable, and then fill in with the required earlier practice and content resources. That seemed similar to what was offered up in answer to my question during the chat.
The other issue is scoping and budgeting the process. I often ask, when talking about game design, how you know when to stop iterating. The usual (and wrong) answer is when you run out of time or money. The right answer is when you've hit your metrics, the ones you should set before you begin, that determine the parameters of a solution (and they can be consciously reconsidered as part of the process). The typical answer, particularly for those concerned with controlling costs, is something like a heuristic choice of 3 iterations. Drawing on some other work in software process, I'd recommend creating estimates, and then reviewing them afterward. In the software case, people got much better at estimates, and that could be a valuable extension here. It shouldn't be any more difficult to estimate, certainly with some experience, than existing methods.
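To make the stopping rule concrete, here's a minimal sketch (the revise() and evaluate() functions are hypothetical stand-ins, and all the numbers are invented): iterate until the pre-set metric is hit, then review the estimate against the actuals.

```python
import random

# Sketch of metrics-driven iteration: set the success criterion before you
# begin, estimate the effort, iterate until you hit the metric (or exhaust
# the budget), then compare estimate with actual so estimating improves.

def revise(prototype):
    """Stand-in for one design/develop pass (hypothetical)."""
    return prototype + 1

def evaluate(prototype):
    """Stand-in for testing with real learners against the pre-set metric."""
    return prototype * 10 + random.uniform(-5, 5)

def iterate_until_good(prototype, target_score, estimated_iterations, max_iterations):
    actual = 0
    while actual < max_iterations:
        actual += 1
        prototype = revise(prototype)
        if evaluate(prototype) >= target_score:  # stop on metrics, not money
            break
    # the review step: compare the estimate with what actually happened
    print(f"estimated {estimated_iterations} iterations, used {actual}")
    return prototype

iterate_until_good(prototype=0, target_score=80, estimated_iterations=3, max_iterations=10)
```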
Ok, so I may be a bit jaded about new brandings on what should already be good practice, but I think anything that helps us focus on developing in ways that lead to quality outcomes is a good thing. I encourage you to work more collaboratively, develop and test more iteratively, and work on discrete chunks. Your stakeholders should be glad you did.
Accreditation and Compliance Craziness
A continued bane of my existence is the ongoing requirements that are put in place for a variety of things. Two in particular are related and worth noting: accreditation and compliance. The way they’re typically construed is barking mad, and we can (and need to) do better.
Let's start with accreditation. It sounds like a good thing: making sure that someone issuing some sort of certification has the proper procedures in place. And, done right, it would be. However, what we currently see is that, basically, the body says you have to take what the Subject Matter Expert (SME) says as gospel. And this is problematic.
The root of the problem is that SMEs don’t have access to around 70% of what they do, as research at the University of Southern California’s Cognitive Technology group has documented. However, of course, they have access to all they ‘know’. So it’s easy for them to say what learners should know, but not what learners actually should be able to do. And some experts are better than others at articulating this, but the process is opaque to this nuance.
So unless the certification process is willing to allow the issuing institution the flexibility to use a process to drill down into the actual 'do', you're going to get knowledge-focused courses that don't actually achieve important outcomes. You could do things like incorporating those who depend on the practitioners, and/or using a replicable and grounded process with SMEs that helps them work out what the core objectives need to be: meaningful ones, à la competencies. And a shoutout to Western Governors University for somehow being accredited using competencies!
Compliance is, arguably, worse. Somehow, the amount of time you spend is the important determining factor. Not what you can do at the end, but instead that you've done something for an hour. The notion that amount of time spent relates to ability, at this level of granularity, is outright maniacal. Time would matter, differently for different folks, but only if you're doing the right thing, and there's no stricture for that. Instead, if you've been subjected to an hour of information, that somehow is going to change your behavior. As if.
Again, competencies would make sense. Determine what you need them to be able to do, and then assess that. If it takes them 30 minutes, that's OK. If it takes them 5 hours, well, that's what it takes to be compliant.
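As a sketch of the record-keeping difference (the field names and threshold are my own invention), the pass decision hinges on demonstrated ability, with time recorded but irrelevant:

```python
from dataclasses import dataclass

# Sketch: compliance keyed to demonstrated competency, not seat time.
# The fields and passing threshold are illustrative assumptions.

@dataclass
class ComplianceRecord:
    learner: str
    competency: str
    assessment_score: float  # performance on a do-focused assessment
    minutes_spent: float     # recorded, but NOT a pass criterion

def is_compliant(record, passing_score=0.8):
    # 30 minutes or 5 hours: only the demonstrated ability matters
    return record.assessment_score >= passing_score

fast = ComplianceRecord("pat", "handle-spill", 0.9, minutes_spent=30)
slow = ComplianceRecord("lee", "handle-spill", 0.9, minutes_spent=300)
assert is_compliant(fast) and is_compliant(slow)
```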
I'd like to be wrong, but I've seen personal instances of both of these, working with clients. I'd really like to find a point of leverage to address this. How can we start having processes that identify the necessary skills, and then use those to determine ability, not time or arbitrary authority? Where can we start to make this necessary change?
3 C’s of Engaging Practice
In thinking through what makes experiences engaging, and in particular making practice engaging, I riffed on some core elements. The three terms I came up with were Challenge, Choices, & Consequences. And I realized I had a nice little alliteration going, so I’m going to elaborate and see if it makes sense to me (and you).
In general, good practice has the learner make decisions in context. This has to be more than just recognizing the correct knowledge option and providing 'right' or 'wrong' feedback. The right decision has to be made, in a plausible situation with plausible alternatives, and the right feedback has to be provided.
So, the first thing is, there has to be a situation that the learner ‘gets’ is important. It’s meaningful to them and to their stakeholders, and they want to get it right. It has to be clear there’s a real decision that has outcomes that are important. And the difficulty has to be adjusted to their level of ability. If it’s too easy, they’re bored and little learning occurs. If it’s too difficult, it’s frustrating and again little learning occurs. However, with a meaningful story and the right level of difficulty, we have the appropriate challenge.
Then, we have to have the right alternatives to select from. Some of the challenge comes from having a real decision, where you can recognize that making the wrong choice would be problematic. But the alternatives must require an appropriate level of discrimination. Alternatives so obvious or silly that they can be ruled out aren't going to lead to any learning. Instead, they need to be ways learners reliably go wrong, representing misconceptions. The benefits are several: you can find out what they really know (or don't), and you have the chance to address those misconceptions. This also assists in having the right level of challenge. So you must have the right choices.
Finally, once the choice is made, you need to have feedback. Rather than immediately having some external voice opine 'yes' or 'no', let the learner see the consequences of that choice. This is important for two reasons. For one, it closes the emotional loop, as you see what happens, wrapping up the experience. Second, it shows how things work in the world, exposing the causal relationships and assisting the learner's understanding. Then you can provide feedback (or not, if you're embedding this single decision in a scenario or game where other choices are precipitated by this one). So, the final element is consequences.
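To pull the three C's together, here's a minimal sketch (the structure and all the names are mine, invented for illustration): each choice carries the misconception it represents and the consequence the learner sees before any feedback.

```python
from dataclasses import dataclass, field
from typing import Optional, List

# Sketch of the three C's as a structure: Challenge (a meaningful situation
# at the right difficulty), Choices (alternatives keyed to reliable
# misconceptions), Consequences (shown before any external feedback).
# All names and content are illustrative.

@dataclass
class Choice:
    text: str
    misconception: Optional[str]  # None for the correct option
    consequence: str              # shown first, closing the emotional loop
    feedback: str                 # the explanation, delivered afterwards

@dataclass
class PracticeDecision:
    situation: str                # the challenge, meaningful to the learner
    difficulty: int               # tuned to the learner's level
    choices: List[Choice] = field(default_factory=list)

def respond(decision, picked):
    choice = decision.choices[picked]
    print(choice.consequence)  # consequence first...
    print(choice.feedback)     # ...then feedback (or defer it in a scenario)

spill = PracticeDecision(
    situation="A chemical spill during the rush shift",
    difficulty=2,
    choices=[
        Choice("Evacuate and cordon the area", None,
               "The area clears safely.", "Right: containment comes first."),
        Choice("Mop it up quickly yourself", "speed over safety",
               "Fumes spread; two colleagues feel ill.",
               "A reliable misstep: urgency doesn't outrank containment."),
    ],
)
respond(spill, picked=1)
```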
While this isn’t complete, I think it’s a nice shorthand to guide the design of meaningful and engaging practice. What do you think?
Concrete and Contextual
I'm working on the learning science workshop I'm going to present at DevLearn next month. In thinking about how to represent the implications of designing for the fact that we learn better when the learning context is concrete and sufficient contexts are used, I came up with this, which I wanted to share.
The empirical data is that we learn better when our learning practice is contextualized. And if we want transfer, we should have practice in a spread of contexts that will facilitate abstraction and application to all appropriate settings, not just the ones seen in the learning experience. If the space between our learning applications is too narrow, so too will our transfer be. So our activities need to be spread about in a variety of contexts (and we should be having sufficient practice).
Then, for each activity, we should have a concrete outcome we're looking for. Ideally, the learner must produce a concrete deliverable that mimics the type of outcome we expect them to be able to create as a result of the learning, whether a decision, a work product, or the like. Ideally it's a social situation where they're working as a team (or not), and the work can be circulated for peer review. Regardless, there should then be expert oversight on the feedback.
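Here's a minimal sketch of the spread principle (the objective, contexts, and counts are all made up): cycle a fixed amount of practice across varied contexts rather than drilling a single one.

```python
# Sketch: for transfer, spread practice for one objective across varied
# contexts rather than repeating one. All the specifics are illustrative.

contexts = ["retail floor", "phone support", "field visit", "online chat"]

def practice_plan(objective, contexts, n_activities):
    """Cycle through the full spread of contexts so the space between
    practice settings is wide enough to support abstraction."""
    return [f"{objective} -- produce the deliverable in: {contexts[i % len(contexts)]}"
            for i in range(n_activities)]

for activity in practice_plan("de-escalate an upset customer", contexts, 6):
    print(activity)
```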
With a focus on sufficient and meaningful practice, we’re more likely to design learning that will actually have an impact. The goal is to have practice that is aligned with how our learning works (my current theme: aligning with how we think, work, and learn). Make sense?
Where in the world is…
It’s time for another game of Where’s Clark? As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:
- First I’ll be hitting Shenzhen, China at the end of August to talk advanced mlearning for a private event.
- Then I’ll be hitting the always excellent DevLearn in Las Vegas at the end of September to run a workshop on learning science for design (you should want to attend!) and give a session on content engineering.
- At the end of October I’m down under at the Learning@Work event in Sydney to talk the Revolution.
- At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.
- That might seem like enough, but I’ll also be at Online Educa in Berlin at the beginning of December running an mlearning for academia workshop and seeing my ITA colleagues.
Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see you at one event or another before the year is out.
Designing Learning Like Professionals
I'm increasingly realizing that the ways we design and develop content are part of the reason why we're not getting the respect we deserve. Our brains are arguably the most complex things in the known universe, yet we don't treat our discipline as the science it is. We need to combine experience design with learning engineering to really start delivering solutions.
To truly design learning, we need to understand learning science. And this does not mean paying attention to so-called 'brain science'. There is legitimate brain science (cf. Medina, Willingham), and then there's a lot of smoke.
For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning. Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice. The list goes on.
What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes. And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens. And much about that is still under debate (just google 'consciousness' :).
What we do have are robust theories about learning that pretty comprehensively integrate the empirical data. More importantly, we have lots of 'take home' lessons about what does, and doesn't, work. But just following a template isn't sufficient. There are gaps where we have to use our best inferences, based upon models, to fill in.
The point I’m trying to make is that we have to stop treating designing learning as something anyone can do. The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please, please, start designing in coherence with what we know about how people learn.
If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.
Engagement
I had the occasion last week to attend a day of ComicCon. If you don't know it, it is a conference about comics, but also much, much more. It covers movies and television, games (computer and board), and more. It is also a pop culture phenomenon, where new releases are announced, analysis and discussion occur, and people dress up. And it is huge!
I have gone to many conferences, and some are big, e.g. ATD's ICE or Online Educa, or Learning Technologies (certainly the exhibit hall). This made the biggest of those seem like a rounding error. It's more like the Super Bowl. People camp out in line to attend the best panels, and the exhibit hall is so packed that you can hardly move. The conference itself is so big that it maxes out the San Diego Convention Center and spills out into adjoining hotels.
And that is really the lesson: something here is generating mad passion. Such overwhelming interest that there's a lottery for tickets! I attended once in the very early days, when it was small and cozy (as a college student), but this is something else. I haven't been to the Oscars, but this is bigger than what's shown on TV. It's bigger than E3. Again, I haven't seen CES since the very early days, but it can't be much larger. And this isn't for biz; this is for the people, and their own hard-earned dollars. In designing learning, we would love to achieve such motivation. So what's going on?
So first, comics tap into some cultural touchstone; they appear in most (if not all) cultures that have developed mass media. They tell ongoing stories that resonate with individuals, and drive other media including (as mentioned) movies, TV, games, and toys. They can convey drama or comedy, and comment on the human condition with insight and heart. The best are truly works of art (oh, Bill Watterson, how could you stop?).
They use the standard methods of storytelling: strip away unnecessary details, have (even unlikely) heroes and villains, obstacles and triumphs. And they can convey powerful lessons about values and consequences, things we often are trying to achieve. It's done through complex characters, compelling narratives, and stylistic artwork. As Hilary Price (author of the comic Rhymes with Orange) told us in a panel, she's a writer first and an artist second.
We don't use graphic novel/comic/cartoon formats nearly enough in learning, and we could and should. Similarly with games, the interactive equivalent, for meaningful practice. I fear we take ourselves too seriously, or let stakeholders keep us from truly engaging our learners. We can and should do better. We need to understand audience engagement, and leverage that in our learning experiences. To restate: it's not about content, it's about experience. Are you designing experiences?
Emergent experience?
So I was reading something that talked about designed versus emergent experiences. Certainly we have familiarity with designed experiences: courses/training, film, theater, amusement parks. Yet emergent experiences seem like they'd have some unique outcomes, and consequently could be more valuable and memorable. So I wondered how an emergent experience might play out so as to reliably generate a good experience.
The issue is that designed experiences, e.g. a Disney ride, are predictable. You can repeat them and notice new things, yet the experience is largely the same. And there can be brilliant minds behind them, and great outcomes including learning. But could and should we shoot higher?
What emergent experiences do we know? Emergent means having to interact with something unpredictable and perhaps even reactive. It could be interacting with systems, or it could be interpersonal interaction. So, what we see in clouds, and experiences we have with games, and certainly interpersonal experiences can be emergent. Can they repeatedly have desired outcomes as well as unpredictable ones?
I think the answer is yes, if you allow for the role of some 'interference'. That is, someone playing a role in controlling the outcomes. This is what happens in Dungeons and Dragons games where there is a Dungeon Master, or in an Alternate Reality Game where there's a Puppet Master, or in social learning where an instructor is structuring group assignments.
I'm interested in the latter, and the blend between them. I propose that our desired learning experiences should go beyond fixed designs, as our limitations as designers and SMEs will constrain what outcomes we achieve. They may be good, but what can happen when people interact with each other, and with rich systems, allows for more self-discovery and ownership. An alternative to social interaction would be practice set in a richer simulation, with some randomness that mimics the variations seen in the real world, beyond our specific designs.
By creating this richness, through interpersonal interaction via dialogue and different viewpoints, or through simulations, we create experiences that go beyond our limitations in specific design. It certainly may go beyond our resources: branching scenarios and asynchronous independent learning are understandably more pragmatic. But when we can, and when the learning outcomes we need are richer than we can suitably address in a direct fashion, say when we need flexible adaptation to circumstances, we should consider designing emergent experiences. And I'm inclined to think that social learning is the cheaper way to go, compared to a complex system-generated experience.
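As a toy sketch of the simulation route (every variable here is invented), randomizing situational parameters within bounds makes each run a fresh, yet still on-objective, experience:

```python
import random

# Toy sketch of an 'emergent' practice simulation: the decision objective is
# fixed, but the situational variables vary run to run, so the experience
# goes beyond any single authored branch. All variables are invented.

def generate_scenario():
    return {
        "customer_mood": random.choice(["irate", "confused", "impatient"]),
        "time_pressure": random.randint(1, 5),   # 1 = relaxed, 5 = urgent
        "prior_contacts": random.randint(0, 3),  # history shapes the dialogue
    }

for _ in range(3):
    # the learner's decisions play out against a configuration no fixed
    # branching design enumerated in advance
    print(f"Scenario: {generate_scenario()}")
```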
I’m just thinking out loud here, a tangent sparked by a juxtaposition, part of my ongoing efforts to make sense of the world and apply that to creating more resilient and successful organizations. Based upon the above, I think emergent experiences can create more adaptable and flexible learning, and I think that’s increasingly needed. I welcome your thoughts, reflections, pointers, disagreements, and more.
