Roger gave his impassioned, opinionated, irreverent, and spot-on talk to kick off LearnTechAsia. He covered the promise (or not) of AI, learning, stories, and the implications for education.
Showing the World
One of the positive results of investigations into making work more effective has been the notion of transparency, which manifests as either working and learning ‘out loud’, or in calls to Show Your Work. In these cases, it’s so people can know what you’re doing, and either provide useful feedback or learn from you. However, a recent chat in the L&D Revolution group on LinkedIn on Augmented Reality (AR) surfaced another idea.
We were talking about how AR could be used to show how to do things, providing information for instance on how to repair a machine. This has already been seen in examples by BMW, for instance. But I started thinking about how it could be used to support education, and took it a bit further.
Many years ago, Jim Spohrer proposed WorldBoard, a way to annotate the world. It was like the WWW, but location-specific, so you could have specific information about a place at the place. It was a good idea that got some initial traction but obviously didn’t continue.
The point, however, would be to ‘expose’ the world. In particular, given my emphasis on the value of models, I’d love to have models exposed. Imagine what we could display:
- from the physiology of an animal we’re looking at to the flows of energy in an ecosystem
- the architectural or engineering features of a building or structure
- the flows of materials through a manufacturing system
- the operation of complex devices
The list goes on. I’ve argued before that we should expose our learning designs as a way to hand over learning control to learners, developing their meta-learning skills. I think if we could expose how things work and the thinking behind them, we’d be boosting STEM in a big way.
We could go further, annotating exhibits and performances as well. And it could be auditory as well, so you might not need to have glasses, or you could just hold up the camera and see the annotations on the screen. You could of course turn them on or off, and choose which filters you want.
The systems exist: Layar commercially, ARIS in the open source space (with different capabilities). The hard part is the common frameworks: agreeing on what, how, and so on. However, the possibility of really raising understanding is very much an opportunity. Making the workings of the world visible seems to me a very intriguing way to leverage the power we now hold in our hands. Ok, so this is ‘out there’, but I hope we might see it flourish quickly. What am I missing?
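As a minimal sketch of the annotation idea, here’s a location-anchored note tagged with a layer the viewer can filter on. The fields, layer names, and coordinates are my own illustrative assumptions, not how Layar or ARIS actually model things:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """A WorldBoard-style note anchored to a place (fields are illustrative)."""
    lat: float
    lon: float
    layer: str    # e.g. "architecture" or "ecosystem" -- hypothetical layer names
    content: str

def visible(annotations, active_layers, lat, lon, radius=0.01):
    """Return nearby annotations, filtered to the layers the viewer has turned on."""
    return [a for a in annotations
            if a.layer in active_layers
            and abs(a.lat - lat) <= radius
            and abs(a.lon - lon) <= radius]

notes = [
    Annotation(37.80, -122.40, "architecture", "Steel frame exposed at this corner"),
    Annotation(37.80, -122.40, "ecosystem", "Energy flow: kelp -> urchin -> otter"),
]
# A viewer at (37.80, -122.40) with only the architecture layer enabled:
print([a.content for a in visible(notes, {"architecture"}, 37.80, -122.40)])
```

The point of the sketch is the filtering: the same place can carry many overlaid models, and the viewer chooses which ones to expose.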
The Polymath Proposition
At the recent DevLearn conference, one of the keynotes was Adam Savage. And he said something that gave me a sense of validation. He was talking about being a polymath, and I think that’s worth understanding.
His point was that his broad knowledge of a lot of things was valuable. While he wasn’t the world’s expert in any particular thing, he knew a lot about a lot of things. Now if you don’t know him, it helps to understand that he’s one of the two hosts of Mythbusters, a show that takes urban myths and puts them to the test. This requires designing experiments that fit within pragmatic constraints of cost and safety, and that will still answer the question. Good experiment design is an art as well as a science, and given the broad range of what the myths cover, this ends up requiring a large amount of ingenuity.
The reason I like this is that my interests vary broadly (ok, I’m coming to terms with a wee bit of ADD ;). The big picture is how technology can be designed to help us think, work, and learn. This ends up meaning I have to understand things like cognition and learning (my Ph.D. is in cognitive psychology), computers (I’ve programmed and designed architectures at many levels), design (I’ve looked at usability, software engineering, industrial design, architectural design, and more), and organizational issues (social, innovation…). It’s led to explorations covering things like games, mobile, and strategy (e.g. the topics of my books). And more; I’ve led development of adaptive learning systems, content models, learning content, performance support, social environments, and so on. It’s led me further, too, exploring org change and culture, myth and ritual, engagement and fun, aesthetics and media, and other things I can’t even recall right now.
And I draw upon models from as many fields as I can. My Ph.D. research was related to the power of models as a basis for solving new problems in uncertain domains, and so I continue to collect them like others collect autographs or music. I look for commonalities, and try to make my understanding explicit by continuing to diagram and write about my reflections. I immodestly think I draw upon a broad swath of areas. And I particularly push learning to learn and meta-cognition to others because it’s been so core to my own success.
What I thrive on is finding situations where the automatic solutions don’t apply. It’s not just a clear case for ID, or performance support, or… Where technology can be used (or used better) in systemic ways to create new opportunities. Where I really contribute is where it’s clear that change is needed, but what, how, and where to start aren’t obvious. I’ve a reliable track record of finding unique yet pragmatic solutions to such situations, including in the above-named areas where I’ve innovated. And it is a commitment of mine to do so in ways that pass on that knowledge, to work in collaboration to co-develop the approach and share the concepts driving it, and to hand off ownership to the client. I’m not looking for a sinecure; I want to help while I’m adding value and move on when I’m not. And many folks have been happy to have my assistance.
It’s hard for me to talk about myself in this way, but I reckon I bring that polymath ability of a broad background to organizations trying to advance. It’s been in assisting their ability to develop design processes that yield better learning outcomes, through mobile strategies and solutions that meet their situation, to overarching organizational strategies that map from concepts to system. There’s a pretty fair track record to back up what I say.
I am deep in a lot of areas, and have the ability to synthesize solutions across these areas in integrated ways. I may not be the deepest in any one, but when you need to look across them and integrate a systemic solution, I like to think, and try to ensure, that I’m your guy. I help organizations envision a future state, identify the benefits and costs, and prioritize the opportunities to define a strategy. I have operated independently and with partners, but I adamantly retain my freedom to say what I truly think, so that you get an unbiased response from the broad suite of principles I have to hand. That’s my commitment to integrity.
I didn’t intend this to be a commercial, but I did like his perspective and it made me reflect on what my own value proposition is. I welcome your thoughts. We now return you to your regularly scheduled blog already in progress…
Supporting our Brains
One of the ways I’ve been thinking about the role mobile can play in design is thinking about how our brains work, and don’t. It came out of both mobile and the cognitive science for learning workshop I gave at the recent DevLearn. This applies more broadly to performance support in general, so I thought I’d share where my thinking is going.
To begin with, our cognitive architecture is demonstrably awesome; just look at your surroundings and recognize that your clothing, housing, technology, and more are the product of human ingenuity. We have formidable capabilities to predict, plan, and work together to accomplish significant goals. Still, there’s no one all-singing, all-dancing architecture out there (yet), and every such approach also has weak points. Technology, for instance, is bad at pattern-matching and meaning-making, two things we’re really pretty good at. On the flip side, we have some flaws too. So what I’ve done here is to outline the flaws, and how we’ve created tools to get around those limitations. And to me, these are principles for design:
So, for instance, our senses capture incoming signals in a sensory store, which has the interesting property of almost unlimited capacity, but for only a very short time. There is no way all of it can get into our working memory, so what we attend to is what we have access to, and we can’t accurately recall everything we perceive. However, technology (camera, microphone, sensors) can capture it all perfectly. So making capture capabilities available is a powerful support.
Similarly, our attention is limited, and so if we’re focused in one place, we may forget or miss something else. However, we can program reminders or notifications that help us recall important events we don’t want to miss, or draw our attention where needed.
The limits on working memory (you may have heard of the famous 7 ± 2, though more recent estimates put it below 5) mean we can’t hold too much in our brains at once, such as interim results of complex calculations. However, we have calculators that can do such processing for us. We also have limited ability to carry information around for the same reasons, but we can create external representations (such as notes or scribbles) that can hold those thoughts for us. Spreadsheets, outlines, and diagramming tools allow us to take our interim thoughts and record them for further processing.
We also have trouble remembering things accurately. Our long term memory tends to remember meaning, not particular details. However, technology can remember arbitrary and abstract information completely. What we need are ways to look up that information, or search for it. Portals and lookup tables trump trying to put that information into our heads.
We also have a tendency to skip steps. We have some randomness in our architecture (a benefit: if we sometimes do it differently, and occasionally that’s better, we have a learning opportunity), but this means that we don’t execute perfectly. However, we can use process supports like checklists. Atul Gawande wrote a fabulous book on the topic, The Checklist Manifesto, that I can recommend.
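As a minimal sketch of a checklist as a process support (the checklist steps here are invented for illustration), a tool can compare what was actually done against the full list and surface the step our error-prone execution skipped:

```python
def audit(checklist, performed):
    """Return the checklist steps that were skipped, in checklist order."""
    performed_set = set(performed)
    return [step for step in checklist if step not in performed_set]

# Hypothetical maintenance checklist -- the point is the support, not the domain.
CHECKLIST = ["verify power off", "inspect seals", "replace filter", "verify power on"]
done = ["verify power off", "replace filter", "verify power on"]
print(audit(CHECKLIST, done))  # the tool, not our memory, catches the skipped step
```

The design point is that the tool holds the full procedure reliably, so our imperfect execution gets caught before it becomes an error.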
Other phenomena include that previous experience can bias us in particular directions, but we can put supports in place to provide lateral prompts. We can also prematurely evaluate a solution rather than checking to verify it’s the best; data can be used to help us be aware. And we can trust our intuition too much, and we can wear down, so we don’t always make the best decisions. Templates, for example, are a tool that can help us focus on the important elements.
This is just the result of several iterations, and I think more is needed (e.g. about data to prevent premature convergence), but to me it’s an interesting alternate approach to consider where and how we might support people, particularly in situations that are new and as yet untested. So what do you think?
AI and Learning
At the recent DevLearn, Donald Clark talked about AI in learning, and while I largely agreed with what he said, I had some thoughts and some quibbles. I discussed them with him, but I thought I’d record them here, not least as a basis for a further discussion.
Donald’s an interesting guy, very sharp and a voracious learner, and his posts are both insightful and inciteful (he doesn’t mince words ;). Having built and sold an elearning company, he’s now free to pursue what he believes in, and currently that’s the power of technology to teach us.
As background, I was an AI groupie out of college, and have stayed current with most of what’s happened, so I know a bit of the history of the rise of Intelligent Tutoring Systems, the problems with developing expert models, and current approaches like Knewton and Smart Sparrow. I haven’t been free to follow the latest developments as much as I’d like, but Donald gave a great overview.
He pointed to systems being on the verge of auto-parsing content and developing learning around it. He showed an example: dropping in a page about Las Vegas generated questions from it. He also showed how systems can adapt individually to the learner, and discussed how this could provide individual tutoring without many of the limitations of human teachers (cognitive bias, fatigue), and can not only personalize but self-improve and scale!
One of my short-term problems was that the auto-generated questions were about knowledge, not skills. While I do agree that knowledge is needed (à la van Merriënboer’s 4C/ID) as well as applying it, I think focusing on the latter first is the way to go.
This goes along with what Donald has rightly criticized as problems with multiple-choice questions. He points out how they’re largely used as knowledge tests, and I agree that’s wrong, but while there are better practice situations (read: simulations/scenarios/serious games), you can write multiple-choice questions as mini-scenarios and get good practice. However, getting good scenario questions out of auto-parsed content is, to me, still an interesting research problem.
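To make the knowledge-versus-skills point concrete, here’s a deliberately naive sketch of the kind of auto-generation described above: blank out longer content words to produce fill-in-the-blank items. This is my toy illustration, not the demoed system’s algorithm, and note that everything it produces tests recall, not application:

```python
import random
import re

def cloze_questions(text, n=2, seed=42):
    """Blank out longer words (a crude proxy for 'content words') to make items."""
    candidates = sorted(set(re.findall(r"[A-Za-z]{7,}", text)))
    random.seed(seed)  # fixed seed so the toy example is repeatable
    targets = random.sample(candidates, min(n, len(candidates)))
    return [(text.replace(t, "_____"), t) for t in targets]

passage = "Las Vegas is an internationally renowned resort city, known for gambling."
for question, answer in cloze_questions(passage):
    print(question, "->", answer)
```

Even this trivial generator yields plausible-looking quiz items, which is exactly why auto-generated knowledge questions are easy and auto-generated mini-scenarios remain the hard research problem.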
I naturally argued for a hybrid system, where we divvy up roles between computer and human based upon what we each do well, and he said that is what he is seeing in the companies he tracks (and funds, at least in some cases). A great principle.
The last bit that interested me was whether and how such systems could develop not only learning skills, but meta-learning or learning-to-learn skills. Real teachers can develop this (though admittedly it’s rare), and yet it’s likely to be the best investment. In my activity-based learning approach, I suggested that learners should gradually take over choosing their activities, to develop their ability to become self-learners. I’ve also suggested how it could be layered on top of regular learning experiences. I think this will be an interesting area for developing learning experiences that are scalable but truly develop learners for the coming times.
There’s more: pedagogical rules, content models, learner models, etc., but we’re finally getting close to being able to build these sorts of systems, and we should be aware of the possibilities, understand what’s required, and be on the lookout for both the good and the bad on tap. So, what say you?
Connie Yowell #DevLearn Keynote Mindmap
Modelling
So, I found an interesting inconsistency. I had to submit my deck for my DevLearn workshop on Cognitive Science for Learning Design last week, but oddly, for everything I was recommending I had a diagram, except for the notion of using models. This is ironic, since diagrams can be used to convey models. It bugged me, so I pondered.
And then I remembered that I gave a presentation years ago specifically on diagrams. Moreover, in that presentation I had a diagram of a process for creating a diagram (Department of Redundancy Department). So, I finally got around to trying to apply my own process to my lack of a model. And voilà:
The process is to identify the elements, and the relationships, and then additional dimensions. Then you represent each, place them (elements first, relationships second, dimensions last), and tune.
Here the notion is that you have a mental model of a concept, capturing elements and causal relationships. When you see a situation, you select a model where you can map the elements in the model to elements in the context. Then you can use the model to predict what will happen or explain what happened. Which gives you a basis for making decisions, and adapting decisions to different contexts in principled ways.
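As a tiny illustration of using a model to predict, you can represent a mental model as elements plus causal relationships and follow the links from a changed element. The domain and the causal links here are invented for illustration:

```python
# Elements and causal relationships of a simplified, illustrative ecosystem model.
ECOSYSTEM_MODEL = {
    "sunlight": ["algae"],
    "algae": ["herbivores"],
    "herbivores": ["predators"],
}

def predict(model, changed_element):
    """Follow causal links to find everything downstream of a changed element."""
    affected, frontier = [], [changed_element]
    while frontier:
        current = frontier.pop()
        for downstream in model.get(current, []):
            if downstream not in affected:
                affected.append(downstream)
                frontier.append(downstream)
    return affected

print(predict(ECOSYSTEM_MODEL, "sunlight"))
```

The same structure supports explanation by running the links in reverse: given an observed change, look for upstream elements that could have caused it.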
Models are a powerful concept I’ve harped on before, but now I’ve an associated diagram. And I like diagrams. I find mapping the conceptual dimensions to spatial dimensions both helps me get concrete about the models and then gives a framework to share with others. Does this make sense to you, both the concept behind it, and the diagram to represent it?
I’ll be presenting this in the workshop, amongst many other implications from how our brains work (and learn) to the design of learning experiences. Would love to see you there.
Concrete and Contextual
I’m working on the learning science workshop I’m going to present at DevLearn next month. In thinking about how to represent the implications of designing for the fact that we learn better when the learning context is concrete and sufficient contexts are used, I came up with this, which I wanted to share.
The empirical data is that we learn better when our learning practice is contextualized. And if we want transfer, we should have practice in a spread of contexts that will facilitate abstraction and application to all appropriate settings, not just the ones seen in the learning experience. If the spread across our practice contexts is too narrow, so too will our transfer be. So our activities need to be spread about in a variety of contexts (and we should be having sufficient practice).
Then, for each activity, we should have a concrete outcome we’re looking for. Ideally, the learner is given a concrete deliverable they must produce, one that mimics the type of outcome we expect them to be able to create as a result of the learning, whether a decision, work product, or… Ideally we’re in a social situation where they’re working as a team (or not) and the work can be circulated for peer review. Regardless, there should then be expert oversight of the feedback.
With a focus on sufficient and meaningful practice, we’re more likely to design learning that will actually have an impact. The goal is to have practice that is aligned with how our learning works (my current theme: aligning with how we think, work, and learn). Make sense?
Designing Learning Like Professionals
I’m increasingly realizing that the ways we design and develop content are part of the reason why we’re not getting the respect we deserve. Our brains are arguably the most complex things in the known universe, yet we don’t treat our discipline as the science it is. We need to start combining experience design with learning engineering to really start delivering solutions.
To truly design learning, we need to understand learning science. And this does not mean paying attention to so-called ‘brain science’. There is legitimate brain science (cf. Medina, Willingham), and then there’s a lot of smoke.
For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning. Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice. The list goes on.
What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes. And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens. And much about that is still under debate (just google ‘consciousness’ :).
What we do have are robust theories about learning that pretty comprehensively integrate the empirical data. More importantly, we have lots of ‘take home’ lessons about what does, and doesn’t, work. But just following a template isn’t sufficient. There are gaps where we have to use our best inferences, based upon models, to fill in.
The point I’m trying to make is that we have to stop treating designing learning as something anyone can do. The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please, please, start designing in coherence with what we know about how people learn.
If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.
Meta-learn what?
If, indeed, learning is the new business imperative, what does that mean we need to learn? What are the skills that we want to have, or need to develop? I reckon they fall into two categories: those we use for our own learning, and those for learning with and through others.
When we learn on our own, we need to address what information we want coming in and how we process it. This falls under Harold Jarche’s Personal Knowledge Mastery: Seek – Sense – Share. To me there are two main components: what you actively seek, and what comes to you.
What you actively seek comes down to your searching abilities. Several things come into play. One is knowing where to look. When do you google, when do you do an internal search, when do you check out a book? How to look is also a component. Do you know how to make a good search string? Do you know how to evaluate the quality of the responses you get? I see too often that people aren’t critical enough in looking at purveyed information.
Then, you also want to set up a stream of information that comes to you. Who to follow on social media? What streams of information? How do you find what sources others use? How do you track what’s happening in your areas of interest and responsibility without getting overwhelmed? This is personal information management, and it requires active management, as sources change. And there are different strategies for different media, as well.
Note that this crosses over into social, but people don’t necessarily know you’re following them. While there may be a notification, they don’t know how much attention you’re paying. I’ve talked about ‘stealth mentoring’, where you can follow someone’s tweets and blog posts, and they can serve as a mentor for you without even knowing it!
There’s some processing of that information, too. What do you do with it? How do you make sense of it? If you hear X over here, and Y over there, you should try to actively reconcile them (e.g. as I did here with collaboration and cooperation). Do you diagram, write, make a video…?
Of course, if you do process it, do you share it? Now we’re crossing over into the social space more proactively. There are good reasons to ‘show your work’: it helps others understand where you are in your process so they can offer help, and your thoughts, even interim ones, can help both you and others sort out their thinking. There are some skills involved in figuring out how to systematically share, and of course some diligence and effort is required too, at least before it becomes a habit.
And, of course, there is explicitly asking for help. There are ways to ask for help that aren’t effective! Similarly, there are ways to offer help that won’t necessarily be taken up. So there are skills involved in communicating.
Similarly, collaboration shouldn’t be taken for granted. Do you know different ways to collaborate on documents, presentations, and spreadsheets? Hint: there are better ways than emailing around files! How do you manage a collaboration process so that it maximizes the outcome? For instance, there are nuances to brainstorming.
There are lots of skills involved, and not only should you develop your own, but you should consider the benefits to the organization of developing them systematically and systemically. So, what did I miss? Wondering if I should try to diagram this…