Graham Roberts kicked off the second day of the Realities 360 conference with a talk on the Future of Immersive Storytelling. He recounted his team's experiences and lessons from building an ongoing suite of immersive pieces, and the journey from their first efforts through to the most recent was insightful. The examples were vibrant inspirations.
Cognition external
I was thinking a bit about distributed cognition, and recognized that there was a potentially important way to tease it apart. I'll talk it out first here, and maybe a diagram will emerge. Or not. The point is to think about how external tools can augment our thinking. Or, really, a way in which, at least partly, we have cognition external.
The evidence says that our thinking isn't completely in our head. And I've suggested that this makes a good case for performance support. But I realize it goes further, in ways I've thought about elsewhere. So I want to pull those together.
The alternative to performance support, a sort of cognitive scaffolding, is to think about representation. Here we’re not necessarily supporting any particular performance, but instead supporting developing thinking. I shared Jane Hart’s diagram yesterday, and I know that it’s a revision of a prior one. And that’s important!
The diagram captures her framework, and such externalizations are a way to share; they're a social as well as an artifactual sharing. It's part of a 'show your work' approach to continuing to think. Of course, it doesn't have to be social; it can be personal.
So both of these forms of distributed cognition externalize our thinking in ways our minds alone struggle to manage. We can play around with relationships by representing them spatially. We can bridge our cognitive gaps both formally, through performance support, and informally, by supporting the externalization of our thinking. Spreadsheets are another tool for externalizing our thinking. So, for that matter, is text.
So we can augment our performance, and scaffold our thinking. Both can be social or solitary, but both qualify as forms of distributed cognition (beyond social). And, importantly, both should then be consciously considered in thinking about revolutionizing L&D. We should be designing for cognition external. The tools should be there, and the facilitation, to use either when appropriate. So, think distributed, as well as situated and social. It's how our brains work; we ought to use that as a guide. You think?
Labels for what we do
Of late there's been a resurrection of a long-term problem. While it's true for our field as a whole, it's also true for the specific job of those who design formal learning. I opined about the problem of labels for what we do half a year ago, but it has raised its head again. And this time, some things have been said that I don't fully agree with. So, it's time to weigh in again.
So, first, Will Thalheimer wrote a post in which he claims to have the ultimate answer (in his usual understated way ;). He goes through the usual candidate labels for what we do – instructional designer, learning designer, learner experience designer – and finds flaws.
And I agree with him on learning designer and instructional designer. We can't actually design learning; we can only create environments where learning can happen. It's a probabilistic game. So learning designer is out.
Instructional designer, then, would make sense, but…it's got too much baggage. If we had a vision of instruction that included the emotional elements – the affective and conative components – I could buy it. And purists will say they do (at least, those influenced by Keller). But I will suggest that the typical vision is of a behavioristic approach: a rigorous focus on content and assessment, with less-than-pragmatic approaches to spacing and flexibility.
He doesn't like learning engineer for the same reason as learning designer: you can't 'engineer' learning. I don't quite agree. One problem is that right now there are two interpretations of learning engineer. My original take on the phrase was that it's about applying learning science to real problems, just as a civil engineer applies physics…and I liked that. Though, yes, you can lead learners to learning, but you can't make them think.
However, Herb Simon's original take (now instantiated in the IEEE's initiative on learning engineering) focused more on the integration of learning science with digital engineering. And I agree that's important, but I'm not sure one person needs to be able to do it all. Is the person who engineers the underlying content engine the same as the person who designs the experiences manifested out of that system? I think the larger picture increasingly relies on teams. So I'm taking that one out of contention for now.
Will's answer: learning architect. Now, in my less-than-definitive post last year, I roughly equated learning experience designer and learning architect. However, Will disparages the former and heaps accolades on the latter. My concern is that architects design a solution, but then it not only gets built by others, it gets interior-designed by others, and… It's too 'hands off'! And as I pointed out, I've called myself that recently, but in that role I may have been more an architect ;).
His argument against learning experience designer doesn't sit well with me. Ignoring the aspersions cast against those to whom he attributes the label, his underlying argument is that just designing experiences isn't enough. He admits we can't ensure learning, but suggests that this is a weak response. And here's where I disagree. I think the inclusion of 'experience' does exactly what I want to focus on: the emotional trajectory and the motivational commitment. Not to the exclusion of the learning sciences, of course. AND, I'd suggest, it also recognizes that the experience is not an event, but an extended set of activities, spanning technologies as needed.
The problem, as Jane Bozarth raised in a column, is more than just this, however. What research into the role shows is that there are just too many jobs being lumped under the label (whatever it is). Do you develop too? Do you administer the LMS? The list goes on.
Perhaps we need multiple job titles. We can be an instructional designer, or a learning experience designer, or an instructional technologist. Or even a learning engineer (once that's clear ;). But we need to keep focused, and as Jane advised, not get too silly (wizard?). It's hard enough to describe what we do without worrying about labels for it. I think I'll stick with learning experience designer for now. (Not least because I'm running a workshop on learning experience design at DevLearn this fall. ;) That's my take; what's yours?
New reality
I’ve been looking into ‘realities’ (AR/VR/MR) for the upcoming Realities 360 conference (yes, I’ll be speaking). And I found an interesting model that’s new to me, and of course prompts some thoughts. For one, there’s a new reality that I hadn’t heard of! So, of course, I thought I’d share.
The issue is how AR (augmented reality) and VR (virtual reality) relate, and what MR (mixed reality) is. The model I found (by Milgram; my diagram slightly relabels it) puts MR in the middle, between reality and virtual reality. And I like how it makes a continuum of it.
This was the first I'd heard of 'augmented virtuality' (AV). AR is the real world with some virtual scaffolding. AV is more of a virtual world with a little real-world scaffolding; a virtual cooking school in a real kitchen is an example. The virtual world guides the experience, instead of the real world.
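To make the continuum concrete, here's a toy sketch in Python. The numeric scale and the band boundaries are my own illustrative assumptions; Milgram's model just orders the categories, it doesn't assign numbers.

```python
# Toy sketch of Milgram's reality-virtuality continuum.
# The 0.0-1.0 scale and band boundaries are illustrative assumptions.

def classify(virtuality: float) -> str:
    """Map a 'how virtual is this experience?' score to a label."""
    if virtuality <= 0.0:
        return "Reality"                    # the unmediated real world
    if virtuality < 0.5:
        return "Augmented Reality (AR)"     # real world, virtual scaffolding
    if virtuality < 1.0:
        return "Augmented Virtuality (AV)"  # virtual world, real scaffolding
    return "Virtual Reality (VR)"           # fully simulated

def is_mixed_reality(virtuality: float) -> bool:
    # MR covers everything strictly between the two endpoints.
    return 0.0 < virtuality < 1.0

print(classify(0.2), is_mixed_reality(0.2))  # Augmented Reality (AR) True
print(classify(0.8), is_mixed_reality(0.8))  # Augmented Virtuality (AV) True
```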
The core idea to me is about story. If we're doing this with a goal, what is the experience driver? What is pushing the goal? We could have a real task that we're layering AR on top of to support success (more performance support than learning). In VR, we necessarily have a goal in the simulated world. AV strikes me as having a story created in the virtual world that uses virtual imagery and real locations. Kind of like The Void experience.
This reminded me of the Alternate Reality Games (ARGs) that were talked about quite a bit back in the day. They can be driven by media, so they're not necessarily limited to locations. A colleague had built an engine that would allow experiences driven by communications technologies: text messages, email, phone calls; these days we could add tweets and posts on social media and apps. These, in principle, are great platforms for learning experiences, as they're driven by the tools you'd actually use to perform. (When I asked my colleagues why they think ARGs 'disappeared', the reason was largely cost; that's avoidable, I believe.)
I like this continuum, as it puts ARGs and VR and AR in a conceptually clear framework. And, as I argue for extensively, good models give us principled bases for decisions and design. Here we’ve got a way to think about the relationship between story and technology that will let us figure out what makes the best approach for our goals. This new reality (and the others) will be part of my presentation next month. We’ll see how it manifests by then ;).
Learning Lessons
So, I just finished teaching a mobile learning course online for a university. My goal was not to 'teach' mobile so much as to develop a mobile mindset. You have to think differently than the phrase 'mobile learning' might lead you to. And, not surprisingly, some things went well, and some things didn't. I thought I'd share the learning lessons, both for my own reflection, and for others.
As a fan of Nilson's specifications grading, I created a plan for how the assessment would go. I wanted lots of practice, less content. And I do believe in checking knowledge up front, then having social learning, and a work product. Thus, each week had a repeated structure of each element. It was competency-based, so you either did it or not. There was no aggregation of points; instead, you get a given grade if you do this many assignments correctly, write a substantive comment in a discussion board and comment on someone else's this many times, and complete this level on this many knowledge checks. And I staggered the deadlines through the week, so there'd be reactivation. I've recommended this scheme on principle, I think it worked out well in practice, and I'd do it again.
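To illustrate the mechanics, here's a minimal sketch of how such a specifications-grading rubric could compute a grade. The bundle thresholds are invented for illustration, not the actual numbers from my course.

```python
# Minimal sketch of a specifications-grading rubric (hypothetical thresholds).
# Competency-based: you meet every threshold in a bundle, or you don't earn it.

GRADE_BUNDLES = [
    # (grade, correct assignments, substantive discussion contributions, knowledge checks passed)
    ("A", 7, 7, 7),
    ("B", 6, 6, 6),
    ("C", 5, 5, 5),
]

def grade(assignments: int, discussions: int, checks: int) -> str:
    """No point aggregation: the first bundle whose thresholds are all met wins."""
    for letter, a, d, k in GRADE_BUNDLES:
        if assignments >= a and discussions >= d and checks >= k:
            return letter
    return "F"

# The grade is capped by the weakest element, not averaged across them.
print(grade(assignments=7, discussions=7, checks=5))  # "C"
```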
In many ways it 'teacher-proofed' the class. For one, the students gave each other feedback in the discussions. Both the discussion questions and the assignments were designed to elicit the necessary thinking, which made marking the assignments relatively easy. And the knowledge checks established a baseline of background. Designing them all as scenario challenges was critical as well.
And I was really glad I mixed things up. In the early weeks, I had them look at apps, and evaluate ones that they liked. For the social week, I had them collaborate in pairs. In the contextual week, they submitted a video of themselves. For the design week, they had to submit an information architecture. And for the development week, they tested it. Thus, each assignment was tied to mobile.
It was undermined by a couple of things. First, the LMS interfered. I wrote careful feedback for each wrong answer on each knowledge-check question. And, it turns out, the students weren't seeing it! (And they didn't let me know 'til the second half of the abbreviated semester!) There was a flag I wasn't setting, and showing the feedback wasn't the default! (Which was a point I then emphasized in the design week: start with good defaults!)
And I missed making the discussions 'gradeable' until late, because of another flag. That's at least partly on me. It meant, again, that they weren't getting feedback, and that's not good. And, of course, it wasn't obvious 'til I remedied it. Also, my grading scheme didn't fit the LMS's default grading schema, so it wasn't automatically doable anyway. Next time, I'd investigate that and see if I could make it more obvious. And learn about the LMS earlier. (Ok, so I had some LMS anxiety and put it off…)
With 8 weeks, I broke it up like this:
- Overview: mobile is not courses on a phone. The Four C's.
- Formal Learning: augmenting learning.
- Performance Support: mobile's natural niche.
- Social: connecting to people 'on the go'.
- Contextual: the unique mobile opportunity.
- Design: if you get the design right…
- Development: practicalities and testing.
- Strategy: platform and policy.
And I think this was the right structure. It naturally reactivated prior concepts, and developed the thinking before elaborating.
For the content, I had a small set of readings. Because of a late start, I only found out I couldn't use my own mLearning book when the bookstore told me it was out of print (!). That required scrambling and getting approval to use some other writings I'd done. The late start also precluded organizing other readings. No worries, minimal was good. And I wrote a script that covered the material, filmed myself giving a lecture for each week, and also provided the transcript.
The university itself was pretty good. They capped the attendance at 20. This worked really well. (Anything else would’ve been a deal breaker after a disaster many years ago when an institution promised to keep it under 32 and then gave me 64 students.) And there was good support, at least during the week, and some support was available even over the weekend.
Overall, despite some hiccups and some stress, I think it worked out (particularly under the constraints). Of course, I'll have to see what the students say. One other thing I'd do that I didn't do a good job of generally (I did with a few students) is explain the pedagogy. I've learned this lesson before, and should've acted on it, but in the rush to wrestle with the systems, it slipped through the cracks.
Those are my learning lessons. I welcome your feedback and lessons!
Shaming, safety, & misconceptions
Another Twitter debate, another blog post. As an outgrowth of a #lrnchat discussion, a debate arose around whether making errors in learning could be a source of shaming. This wasn't about the learners being afraid of being shamed, however. Instead it was about whether designers would feel proscribed from designing in real errors because of their expectations of learners' emotions. And I have strong beliefs about why this is an important issue. Learners should be making errors, for important reasons. So we need to make it safe!
The importance of errors lies in the fact that we'd rather make them in practice than when it counts. Some have argued that we literally have to fail to be ready to learn. (Perhaps almost certainly if the learners are overconfident.) The importance to me is in misconceptions. Our errors don't tend to be random (there is some randomness), but instead are patterned. They come from systematic ways of perceiving the situation that are wrong. They come from bringing in the wrong models in ways that seem to make sense. And it's best to address them by letting learners make that choice, and get feedback about why it's wrong.
Which means learners will have to fail. And they should be able to make mistakes. (Guided) exploration is good. Learners should be able to try things out, see what the consequences are, and then try other approaches. It shouldn't be a free-for-all, since learners may not explore systematically. Instead, as I've said, learning should be designed action and guided reflection. And that means we should be designing in these alternatives to the right action as options, and providing specific feedback.
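As a concrete (and entirely hypothetical) illustration, here's a minimal sketch of a scenario decision where each wrong option embodies a plausible misconception and carries its own model-referenced feedback:

```python
# Hypothetical scenario-decision structure: the wrong options are designed-in
# misconceptions, each with feedback that references the underlying model.

scenario = {
    "prompt": "A learner fails the same simulation step twice. What do you do?",
    "options": {
        "a": {"text": "Show the right answer immediately",
              "correct": False,
              "feedback": "This removes the chance to self-correct; the model says "
                          "retrying with guidance drives learning, not presentation."},
        "b": {"text": "Give model-based feedback and let them retry",
              "correct": True,
              "feedback": "Right: feedback on the decision, referencing the model, "
                          "keeps failure safe and instructive."},
        "c": {"text": "Tell them they clearly didn't prepare",
              "correct": False,
              "feedback": "This targets the person, not the decision, and that's "
                          "where shaming comes from."},
    },
}

def respond(choice: str) -> str:
    # Feedback addresses what they did and why, never who they are.
    return scenario["options"][choice]["feedback"]

print(respond("a"))
```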
So, if they're failing, is that shaming? Not if we do it right. It's about making failing okay. It's about making the learning experience 'safe'. Our feedback should be about the decision, and why it's wrong (referring to the model). We might not give them the right answer if we want them to try again. But we don't make it personal, just like good coaching: it's about what they did, not who they are. So our design should prevent shaming by making it safe to fail, not by preventing failure.
The one issue that emerged was that designers (or other stakeholders) might fear that this could be emotionally damaging, perhaps projecting fears of their own. Er, nope! It's about the learning, and we know what research tells us works. We have to be willing to do what's right, as challenging as that may be for any reason: time, money, emotions, what have you. Because, if we want to be responsible stewards of the resources entrusted to us, we should be doing what's known to be right. Not chasing shiny objects. (At least, until we get the core right. ;)
So, let’s not shame ourselves by letting irrelevant details cloud our judgment. Do the right thing. For the right reasons. We know how to be serious about our learning. Make it so.
Competencies for L&D Processes?
We have competencies for people. Whether it's ATD, LPI, IBSTPI, IPL, ISPI, or any other acronym, they've got definitions for what people should be able to do. And it made me wonder: should there be competencies for processes as well? That is, should your survey validation process, or your design process, also meet some minimum standards? How about design thinking? There are things you do get certified in, including such piffle as MBTI and NLP. So does it make sense to have processes meet minimum standards?
One of the things I do is help orgs fine-tune their design processes. When I talk about deeper elearning, or we take a stand for serious elearning, there are nuances that make a difference. In these cases, I’m looking for the small things that will have the biggest impact. It’s not about trying to get folks to totally revamp their processes (which is a path to failure). Yet, could we go further?
I was wondering whether we should certify processes. Certainly, that happens in other industries. There are safety processes in maintenance, and cleanliness in food operations, and so on. Could and should we have them for learning? For performance consulting, instructional design, performance support design, etc?
Could we state what a process should have as a minimum requirement? Certain elements, at least, at certain waypoints? You could take Michael Allen's SAM and use it as a model, for instance. Or Cathy Moore's Action Mapping. Maybe Julie Dirksen's Design For How People Learn could be turned into one. The point being that we could stipulate some waypoints in design that would be the minimum to count as sufficient for learning to occur. Based upon learning science, of course. You know, deliberate and spaced practice, etc.
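As a thought experiment, an audit against such a standard might look like this sketch. The waypoints listed are my illustrative guesses at what a standard could require, not an actual certification rubric:

```python
# Hypothetical minimum waypoints a learning design process might be audited
# against. These items are illustrative, not a real standard.

REQUIRED_WAYPOINTS = {
    "performance_gap_analysis",  # is there a real need, and is a course the fix?
    "meaningful_practice",       # scenario-based decisions, not just knowledge test
    "spaced_practice",           # practice distributed over time
    "model_based_feedback",      # feedback tied to the underlying model
    "evaluation_plan",           # how we'll know it worked
}

def audit(process_waypoints):
    """Pass only if every required waypoint is present; report any gaps."""
    missing = REQUIRED_WAYPOINTS - set(process_waypoints)
    return (not missing, missing)

ok, gaps = audit({"meaningful_practice", "evaluation_plan"})
print(ok)    # False
print(gaps)  # the waypoints this process would need to add
```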
Then the question is: should we? Also, could we agree? Or, of course, people could market alternative process certifications. It appears this is what Quality Matters does, for instance, at least in K-12 and higher ed. It appears IACET does this for continuing education certification. Would an organizational certification matter? For customers, if you do customer training? For your courses, if you provide them as a product or service? Would anyone care that you meet a quality standard?
And it could go further: performance support design, extended learning experience design (cf. coaching), etc. Is this something that's better at the person level than the process level?
Should there be certification for compliance with a competency about the quality of the learning design process? Obviously, in some areas. The question is: does it matter for regular L&D? On one hand, it might help guard against the info-dump/knowledge-test courses that are the bane of our industry. On the other hand, it might be hard to find a workable definition that could suit the breadth of ways in which people meet learning needs.
All I know is that we have standards about a lot of things. Learning data interchange. Individual competencies. Processes in education. Can and should there be for L&D processes? I don’t know. Seriously. I’m just pondering. I welcome your thoughts.
Reflection on reflection
Of late, there've been a few dialogs on Twitter. As I opined in the recent podcast I was interviewed for, using Twitter for a dialog is kind of new. I'm not talking about a tweet chat like #lrnchat (which I think is a great thing), but an out-loud dialog with others weighing in. And it's fun, and informative, but occasionally I need to go deeper. So here's a reflection on reflection.
In that podcast interview, I opined, as I often do, about action and reflection. The starting point is the claim that our own learning is action and then reflection. What I mean is that we act in the world, and if we reflect on that, we can learn.
One of the pushbacks was that we can learn without reflection. And, yes, I agree. We can learn without conscious feedback. In fact, in Kathy Sierra’s insightful Badass, she talks about chicken sexing, a task which no one’s been able to make consciously accessible. Things can go below consciousness.
This was related to another pushback: do we really learn differently from chickens and rats? And the answer is no, but what we learn is different. And, further, what we can learn is different. I’ve yet to see rats sending rockets up to the moon to see if it’s made of cheese.
Conscious representations facilitate learning, particularly for things we learn that aren't strongly tied to our evolved survival. Learning about cognition itself, for instance, the ability to think about our own thinking, is something that fundamentally separates us. And, to do that well, conscious artifacts help.
We've found that creating conscious frameworks to facilitate our understanding and acquisition is helpful. Specifically, models and examples are two things that help us develop skills. Models are conceptual relationships that we can use to guide and review our performance. Examples show how those models play out in particular contexts.
There's a follow-up: if learning is action and reflection, then instruction should be designed action and guided reflection. That is: do, get feedback, but also more. To me, models and examples are that additional reflection. We can present them ahead of time (but see problem-based learning), but we should also use them as part of the feedback, pointing out where flaws in performance didn't align with the models, and offering further examples that illustrate those nuances.
Ok, so I may be playing fast and loose with the notion of reflection here, lumping in models and examples and feedback. However, my point is to keep learning from being information dump and knowledge test. We know that won't lead to meaningful change. If I label it action and reflection, we have a better chance to push for application-based instruction.
So, I’ll stick to my claim about (designed) action and (guided) reflection, with the caveat that my ‘reflection’ is more than just noodling. And, yes, it’s for learning goals beyond ‘hitting your head on rocks hurts’. But the goals I’m focusing on are the types of goals that will make a difference in individual and organizational success in our society. If I’m pushing too far and too hard, let me know.
Exploration & Surprise
Some weeks back, I posted about surprise. That is, a new model that says our brains work to minimize surprise. We learn so as not to be wrong. And that made sense in one way, but left a gap in another. Another article explains further (well, partly; the mathematics are more than I want to wade into), and that gives me a new handle on thinking about designing transformative experiences. It's about the value of exploration to accompany surprise.
The problem with the original story of just minimizing surprise is that it leads to another inference: why wouldn't we want to just hang in a dark, warm room? The notion of minimizing surprise did explain people who don't seem keen to learn, but many of us are. And, as Raph Koster told us in A Theory of Fun, the drive to play games seems to be learning! We want exploration, and the outcomes aren't certain. This is in conflict.
The new article posits that there's another factor: the expectation of value. We also want the optimal outcome. The theory says that we'll be willing to try several options with relatively equal predicted value, to learn which to choose in the future (if I've understood the article correctly). So we will explore even under uncertainty, if there's a benefit to learning.
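That trade-off between trying options and exploiting what you already know is familiar from reinforcement learning. Here's a minimal sketch (my illustration, not the article's mathematics) of an agent exploring two nearly equal-valued options to learn which to prefer:

```python
import random

# Two options with nearly equal (hypothetical) payoff probabilities; the agent
# occasionally explores so its value estimates converge on the truth.

TRUE_VALUE = {"A": 0.52, "B": 0.48}
estimate = {"A": 0.0, "B": 0.0}
pulls = {"A": 0, "B": 0}

def choose(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best estimate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(list(estimate))
    return max(estimate, key=estimate.get)

for _ in range(10_000):
    option = choose()
    reward = 1.0 if random.random() < TRUE_VALUE[option] else 0.0
    pulls[option] += 1
    # Incremental averaging of observed rewards.
    estimate[option] += (reward - estimate[option]) / pulls[option]

print(estimate)  # roughly {"A": 0.52, "B": 0.48}: exploring paid off in knowledge
```

That's the mechanical version; the article's claim is the psychological analog.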
This doesn’t quite explain things to me. I think it’s missing some emotional aspect. Why would we do things like try out Escape Rooms or The Void (as I’ve done with colleagues)? There’s no real outcome, except perhaps to know about such experiences. But horror movies or thrillers? All we know is that we’ll have our emotions raised and then settled. But maybe that fits into a good outcome.
Still, this gives me a new handle. When I was preparing the Learning Experience Design workshop I gave at Learning Solutions last month, I was talking about ensuring surprise. That is, the learning experience should make learners aware that they didn’t know what the outcome would be. But I knew, and suggested, that there had to be more. They had to care about the outcome. And now we have the hook.
They care about the outcome because it'll be a higher-value situation once they can achieve it! If we do this right, we let them know that they care about the outcome and that they can't achieve it now (either they know that already, or we have them fail). Then we can offer them the path to the outcome: if they explore, they'll learn. If we've got a truly meaningful outcome (you'll now be able to do X) that they truly care about (you do want to be able to do X), you're set with emotionally ready learners. Cognitive science models suggest that this should work! :)
To turn it around: the point is that you should create a goal that they desire, and then demonstrate that they can't achieve it yet. It's simplistic, but I think it's part of creating a transformative experience, one where they are not just ready for the outcome, but eager. And I think that's desirable. What do you think?
Quinnovations
I was talking with my lass, reminiscing about a few things. And it occurred to me that I may not have mentioned them all. Worse, I confess, I'm still somewhat proud of them. So, at the risk of self-aggrandizement, I thought I'd share a few of my Quinnovations. There's a bigger list here, but this is the 'greatest hits' list, with some annotation. (Note, I've already discussed the game Quest for Independence, one of my most rewarding works.)
One project was a game based upon my PhD topic. I had proposed a series of steps involved in analogical reasoning, and tested them both alone and after some training, finding some improvement (arguing for the value of meta-learning instruction). During my post-doc, a side project was developing a game that embedded analogical reasoning in a story setting. I created a (non-existent) island, and set the story in the myths of its voodoo culture. The goal was a research environment for analogical reasoning; the puzzles in the game required making inferences from the culture. Interestingly, at test most players were random, but a couple were systematic.
With a colleague, Anne Forster, I came up with an idea for an online conference to preface a face-to-face event. This was back circa 1996, so there weren't platforms for such. I secured the programming assistance of a couple of the techs in the office I was working for (Open Net), and we developed the environment. In it, six folks renowned in their areas conducted overlapping conversations around their topics. This set up the event, and saw vibrant discussions.
A colleague at an organization I was working for, Access Australia CMC, had come up with the idea of a competition for school kids to create websites about a topic. Another colleague and I brainstormed a topic for the first running of the event: we had kids report on innovations in their towns that they could share with other towns (anywhere). I led the design and implementation of the competition: site and announcements, getting it up and running. It ended up generating vibrant participation and winning awards.
Upon my return to the US, I led a team to build a learning system that developed learners' understanding of themselves as learners. Ultimately, I conceived of a model whereby we profiled learners as to their learning characteristics (NB: not learning styles) and adapted learning on that basis. There was a lot to it: a content model, rules for adaptation, machine learning for continuing improvement, and more. We got it up and running, and while it evaporated in 2001 (as did the organization we worked for), its legacy served me in several other projects. (And, while to my knowledge they didn't base it on our system, it's roughly the same architecture seen in Knewton.)
Using the concepts from that adaptive system, with one of my clients we pitched and won the right to develop an electronic performance support system. It ended up being a context-sensitive help system (which is what an EPSS really is ;). I created the initial framework, which the team executed against (replacing a help system created by the system engineers, not the right team to do it). The designers wrote content into a framework that populated both the manual (as prescribed by law) and the help system. The client ended up getting a patent on it (with my name on it too ;).
The last one I'll mention for now: a content system for a publisher. They were moving to the next generation of their online tool, and were looking for a framework that would incorporate their existing texts, guide the next generation of texts, and support multiple business models. Again drawing on that content-structure experience, I gave them a structured content model that met their needs. The model was supposed to be coupled with a tech platform, but that project collapsed, meaning my model didn't see the light of day. However, I was pleased to find out later that it had a lasting impact on their subsequent work!
The point being that, in conjunction with clients and partners, I have been consistently generating innovations thru the years. I'm not an academic, tho' I have been one and know the research and theories. Instead, I'm a consultant who comes in early, applies frameworks to come up with ideas that are both good and unique (I capitalize a lot on models I've collected over the years), and gets out quickly when I'm no longer adding value. Clients get an outcome that is uniquely appropriate, innovative, and effective. Ideas they likely wouldn't have come up with on their own! If you'd like to Quinnovate, get in touch!