Neil deGrasse Tyson opened this year’s DevLearn conference. A clear crowd favorite, he had folks lining up to get in (despite the huge room). In an engaging, funny, and poignant talk, he made a great case for science and learning.
29 October 2014
28 October 2014
While our cognitive architecture has incredible capabilities (how else could we come up with advances such as Mystery Science Theater 3000?), it also has limitations. The same adaptive capabilities that let us cope with information overload in both familiar and new ways also lead to some systematic flaws. This led me to think about the ways in which we work around these limitations, as they have implications for designing solutions for our organizations.
The first limit is at the sensory level. Our mind actually processes pretty much all the visual and auditory sensory data that arrives, but it disappears quickly (within milliseconds) except for what we attend to. Basically, your brain fills in the rest (which leaves open the opportunity for mistakes). What do we do? We’ve created tools that capture things accurately: cameras and audio recorders. These let us capture the context exactly, not as our memory reconstructs it.
A second limitation is our ‘working’ memory. We can’t hold too much in mind at one time. We ‘chunk’ information together as we learn it, and can then hold more total information at once. Also, the format of working memory is largely verbal. Consequently, tools like diagrams, outlines, and mindmaps add structure to our knowledge and support our ability to work on it.
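As a toy sketch of chunking (an invented illustration of mine, not anything from research or this post), grouping raw items into larger units means fewer working-memory ‘slots’ are needed:

```python
# Toy illustration of 'chunking': grouping raw items into larger units,
# the way a ten-digit phone number is held as a few chunks rather than
# ten separate digits. (Invented example.)

def chunk(digits: str, size: int = 3) -> list[str]:
    """Group a digit string into fixed-size chunks."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

chunks = chunk("4155551234")
# ten individual digits become four chunks: ['415', '555', '123', '4']
```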
Another limitation of working memory is that it doesn’t support complex calculations with many intermediate steps. Consequently we need ways to deal with this. External representations (as above), such as recording intermediate steps, work, but we can also build tools that offload that process, such as calculators. Wizards, or interactive dialog tools, are another form of calculator.
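A minimal sketch of that offloading idea (a hypothetical example of mine, not from the post): record every intermediate result externally, so the calculation never depends on holding steps in working memory:

```python
# Hypothetical sketch: an external record of intermediate steps, so a
# multi-step calculation doesn't rely on working memory at all.

def compound_growth(principal: float, rate: float, years: int):
    """Compute compound growth, logging each intermediate balance."""
    steps = []  # the 'external representation' of intermediate results
    balance = principal
    for year in range(1, years + 1):
        balance *= (1 + rate)
        steps.append((year, round(balance, 2)))
    return balance, steps

final, trace = compound_growth(1000, 0.05, 3)
# every intermediate balance is kept in 'trace', available for checking
```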
Processing information in short-term memory can lead to it being retained in long-term memory. Here the storage is almost unlimited in time and scope, but information is hard to get in, and isn’t remembered exactly, but by meaning. Consequently, models are a better learning strategy than rote learning. And external sources, like the ability to look up or search for information, are far better than trying to get it all in the head.
Similarly, external support for when we do have to do things by rote is a good idea. Support for process is useful, which is why checklists have been a ubiquitous and effective way to get more accurate execution.
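A checklist is really just externalized process state. As a minimal sketch (the step names here are mine, purely illustrative):

```python
# Minimal sketch of a checklist as external memory support; the steps
# are invented for illustration, not a real procedure.

STEPS = ["verify objectives", "review content", "pilot with SMEs",
         "fix flaws", "release"]

def outstanding(steps, completed):
    """Return the steps not yet done, preserving order, so execution
    depends on the list rather than on recall."""
    done = set(completed)
    return [s for s in steps if s not in done]

remaining = outstanding(STEPS, ["verify objectives", "review content"])
# remaining: ['pilot with SMEs', 'fix flaws', 'release']
```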
In execution, we have a few flaws too. We’re heavily biased to solve new problems in the ways we’ve solved previous problems (even if that’s not the best approach). We’re also likely to use tools in familiar ways and miss new ways to use tools to solve problems. There are ways to prompt lateral thinking at appropriate times, and we can both make access to such support available and even trigger it when we have contextual clues.
We’re also biased to prematurely converge on an answer (intuition) rather than seek to challenge our findings. Access to data and support for capturing and invoking alternative ways of thinking are more likely to prevent such mistakes.
Overall, our use of more formal logical thinking fatigues quickly. Scaffolding help like the above decreases the likelihood of a mistake and increases the likelihood of an optimal outcome.
When you look at performance gaps, you should look to such approaches first, and look to putting information in the head last. This more closely aligns our support efforts with how our brains really think, work, and learn. This isn’t a complete list, I’m sure, but it’s a useful beginning.
24 October 2014
As usual, I will be at DevLearn (in Las Vegas) this next week, and welcome meeting up with you there. There is a lot going on. Here are the things I’m involved in:
- On Tuesday, I’m running an all day workshop on eLearning Strategy. (Hint: it’s really a Revolutionize L&D workshop ;). I’m pleasantly surprised at how many folks will be there!
- On Wednesday at 1:15 (right after lunch), I’ll be speaking on the design approach I’m leading at the Wadhwani Foundation, where we’re trying to integrate learning science with pragmatic execution. It’s at least partly a Serious eLearning Manifesto session.
- On Wednesday at 2:45, I’ll be part of a panel on mlearning with my fellow mLearnCon advisory board members Robert Gadd, Sarah Gilbert, and Chad Udell, chaired by conference program director David Kelly.
Of course, there’s much more. A few things I’m looking forward to:
- The keynotes:
- Neil deGrasse Tyson, a fave for his witty support of science
- Beau Lotto talking about perception
- Belinda Parmar talking about women in tech (a burning issue right now)
- DemoFest, all the great examples people are bringing
- and, of course, the networking opportunities
DevLearn is probably my favorite conference of the year: learning focused, technologically advanced, well organized, and with the right people. If you can’t make it this year, you might want to put it on your calendar for another!
7 October 2014
A colleague I greatly respect, who has a track record of high impact in important positions, has been a proponent of service science. And I confess that it hadn’t really penetrated. Yet last week I heard about it in a way that resonated much more strongly and got me thinking, so let me share where it’s leading my thinking, and see what you say.
I once heard about an exciting concept, interface ‘explorability’, while doing a summer internship at NASA as a grad student. When I brought it back to the lab, it didn’t really resonate with my advisor. Then, a year or two later, he was discussing a concept and I mentioned that it sounded a lot like that ‘explorability’, and he suddenly wanted to know more. The point being that there is a time when you’re ready to hear a message. And that’s me with service science.
The concept is a mutual value-generation process between provider and customer, engineered across the necessary system components and modular integrations to yield a successful solution. As organizations need to be more customer-centric, this perspective yields processes to do that in a very manageable, measurable way. And that’s the perspective I’d been missing when I’d previously heard about it, but Hastings & Saperstein presented it last week at the Future of Talent event in the form of Service Thinking, which brought the concept home.
I wondered how it compared to Design Thinking, another concept sweeping instructional design and related fields, and it appears to be synergistic but perhaps a superset. While nothing precludes Design Thinking from producing the type of outcome Service Thinking is advocating, I’m inferring that Service Thinking is a bit more systematic and higher level.
The interesting idea for me was to think of bringing Service Thinking to the role of L&D in the organization. If we’re looking systematically at how we can bring value to the customer, in this case the organization, we have a chance to look at the bigger picture, the Performance & Development view instead of the training view. If we take the perspective of an integrated approach to meeting organizational execution and innovation needs, we may naturally develop the performance ecosystem.
We need to take a more comprehensive approach, where we integrate technology capabilities, resources, and people into a coherent whole. I’m looking at Service Thinking, as perhaps an integration of the rigor of systems thinking with the creative customer focus of design thinking, as at least another way to get us there. Thoughts?
24 September 2014
I tout the value of learning science and good design. And yet, I also recognize that to do it to the full extent is beyond most people’s abilities. In my own work, I’m not resourced to do it the way I would and should do it. So how can we strike a balance? I believe that we need to use smart heuristics instead of the full process.
I have been talking to a few different people recently who basically are resourced to do it the right way. They talk about getting the right SMEs (e.g. with sufficient depth to develop models), using a cognitive task analysis process to get the objectives, aligning the processing activities to the type of learning objective, developing appropriate materials and rich simulations, and testing the learning and using feedback to refine the product, all before final release. That’s great, and I laud them. Unfortunately, the cost of a team capable of doing this, and the schedule to do it right, doesn’t fit the situation I’m usually in (nor, I suspect, the one most of you are in). To be fair, if it really matters (e.g. lives depend on it, or you’re going to sell it), you really do need to do this (as medical, aviation, and military training usually do).
But what if your team isn’t composed of PhDs in the learning sciences, your development resources are tied to the usual tools, your budgets are far more stringent, and your schedules are likewise constrained? Do you have to abandon hope? My claim is no.
I believe that a smart, heuristic approach is plausible. Using the typical ‘law of diminishing returns’ curve (and the shape of this curve is open to debate), I suggest that there is a sweet spot of design processes that gives you a high amount of value for a pragmatic investment of time and resources. Conceptually, I believe you can get good outcomes with some steps that tap into the core of learning science without following the letter. Learning is a probabilistic game, overall, so we’re taking a small tradeoff in probability to meet real-world constraints.
What are these steps? Instead of doing a full cognitive task analysis, we’ll make our best guess at meaningful activities before getting feedback from the SME. We’ll switch the emphasis from knowledge tests to mini- and branching scenarios for practice tasks, or we’ll have learners take information resources and use them to generate work products (charts, tables, analyses) as processing. We’ll try to anticipate the models, and ask for misconceptions & stories to build in. And we’ll align pre-, in-, and post-class activities in a pragmatic way. Finally, we’ll do a learning equivalent of heuristic evaluation: not a full scientifically valid test, but running it by the SMEs and fixing their (legitimate) complaints, then running it with some students and fixing the observed flaws.
In short, what we’re doing here is approximating the full process, with some smart guesses in place of full validation. There’s no expectation that the outcome will be as good as we’d like, but it’s going to be a lot better than throwing quizzes on content. And we can do it with a smart team that, while not learning scientists, is informed by learning science, on a longer but still reasonable schedule.
I believe we can create transformative learning under real world constraints. At least, I’ll claim this approach is far more justifiable than the too oft-seen approach of info dump and knowledge test. What say you?
17 September 2014
The eLearning Guild is celebrating its 10th year, and is using the opportunity to reflect on what learning will look like 10 years from now. While I couldn’t participate in the Twitter chat they held, I optimistically weighed in: “learning in 2024 will look like individualized personal mentoring via augmented reality, AI, and the network”. However, I thought I would elaborate in line with a series of followup posts leveraging the #lrn2024 hashtag. The Twitter chat had a series of questions, so I’ll address them here (with a caveat that our learning really hasn’t changed: our wetware hasn’t evolved in the past decade and won’t in the next; our support of learning is what I’m referring to here):
1. How has learning changed in the last 10 years (from the perspective of the learner)?
I reckon the learner has seen a significant move to more elearning instead of an almost complete dependence on face-to-face events. And I reckon most learners have begun to use technology in their own ways to get answers, whether via the Google, or social networks like Facebook and LinkedIn. And I expect they’re seeing more media such as videos and animations, and may even be creating their own. I also expect that the elearning they’re seeing is not particularly good, nor improving, if not actually decreasing in quality. I expect they’re seeing more info dump/knowledge test, more and more ‘click to learn more‘, more tarted-up drill-and-kill. For which we should apologize!
2. What is the most significant change technology has made to organizational learning in the past decade?
I reckon there are two significant changes that have happened. One is rather subtle as yet, but will be profound, and that is the ability to track more activity, mine more data, and gain more insights. The Experience API (xAPI) coupled with analytics is a huge opportunity. The other is the rise of social networks. The ability to stay more tightly coupled with colleagues, sharing information and collaborating, has really become mainstream in our lives, and is going to have a big impact on our organizations. Working ‘out loud’, showing our work, and working together is a critical inflection point in bringing learning back into the workflow in a natural way and away from the ‘event’ model.
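For the curious, the Experience API tracks activity as simple actor/verb/object statements sent to a Learning Record Store. A minimal sketch of one such statement (the learner and activity names here are invented for illustration; the verb ID is a standard ADL verb):

```python
import json

# A minimal xAPI-style statement: the actor/verb/object triple used to
# record activity. Actor and activity IDs are invented examples.
statement = {
    "actor": {"name": "A Learner", "mbox": "mailto:learner@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/safety-sim",
               "definition": {"name": {"en-US": "Safety simulation"}}},
}

payload = json.dumps(statement)  # the JSON that would be sent to an LRS
```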
3. What are the most significant challenges facing organizational learning today?
The most significant challenge is the status quo: the belief that an information-oriented event model has any relationship to meaningful outcomes. This plays out in so many ways: order-taking for courses, equating information with skills, being concerned with speed and quantity instead of quality of outcomes, not measuring the impact, the list goes on. We’ve become self-deluded that an LMS and a rapid elearning tool mean you’re doing something worthwhile, when it’s profoundly wrong. L&D needs a revolution.
4. What technologies will have the greatest impact on learning in the next decade? Why?
The short answer is mobile. Mobile is the catalyst for change. So many other technologies go through the hype cycle: initial over-excitement, crash, and then a gradual resurgence (cf. virtual worlds), but mobile has been resistant for the simple reason that there’s so much value proposition. The cognitive augmentation that digital technology provides, available whenever and wherever you are, clearly has benefits, and it’s not courses! It will naturally incorporate augmented reality with the variety of new devices we’re seeing, and be contextualized as well. We’re seeing a richer picture of how technology can support us in being effective, and L&D can facilitate these other activities as a way to move to a more strategic and valuable role in the organization. As above, add the new tracking and analysis tools, and social networks. I’ll add that simulations/serious games are an opportunity that is yet to really be capitalized on. (There are reasons I wrote those books :)
5. What new skills will professionals need to develop to support learning in the future?
As I wrote (PDF), the new skills that are necessary fall into two major categories: performance consulting and interaction facilitation. We need to not design courses until we’ve ascertained that no other approach will work, so we need to get down to the real problems. We should hope that the answer comes from the network when it can, and we should want to design performance support solutions if it can’t, and reserve courses for only when it absolutely has to be in the head. To get good outcomes from the network, it takes facilitation, and I think facilitation is a good model for promoting innovation, supporting coaching and mentoring, and helping individuals develop self-learning skills. So the ability to get those root causes of problems, choose between solutions, and measure the impact are key for the first part, and understanding what skills are needed by the individuals (whether performers or mentors/coaches/leaders) and how to develop them are the key new additions.
6. What will learning look like in the year 2024?
Ideally, it would look like an ‘always on’ mentoring solution, so the experience is that of someone always with you to watch your performance and provide just the right guidance to help you perform in the moment and develop you over time. Learning will be layered on to your activities, and only occasionally will require some special events but mostly will be wrapped around your life in a supportive way. Some of this will be system-delivered, and some will come from the network, but it should feel like you’re being cared for in the most efficacious way.
In closing, I note that, unfortunately, my Revolution book and the Manifesto were both driven by a sense of frustration around the lack of meaningful change in L&D. Hopefully, they’re riding or catalyzing the needed change, but in a cynical mood I might believe that things won’t change nearly as much as I’d hope. I also remember a talk (cleverly titled: Predict Anything but the Future :) that said that the future tends to come out as an informed observer would predict, but with an unexpected twist, so it’ll be interesting to discover what that twist will be.
16 September 2014
Fall always seems to be a busy time, and I reckon it’s worthwhile to let you know where I’ll be in case you might be there too! Coming up are a couple of different events that you might be interested in:
September 28-30 I’ll be at the Future of Talent retreat at the Marconi Center up the coast from San Francisco. It’s a lovely spot with a limited number of participants who will go deep on what’s coming in the Talent world. I’ll be talking up the Revolution, of course.
October 28-31 I’ll be at the eLearning Guild’s DevLearn in Las Vegas (always a great event; if you’re into elearning you should be there). I’ll be running a Revolution workshop (I believe there are still a few spots), part of a mobile panel, and talking about how we are going about addressing the challenges of learning design at the Wadhwani Foundation.
November 12-13 I’ll be part of the mLearnNow event in New Orleans (well, that’s what I call it, they call it LearnNow mobile blah blah blah ;). Again, there are some slots still available. I’m honored to be co-presenting with Sarah Gilbert and Nick Floro (with Justin Brusino pulling strings in the background), and we’re working hard to make sure it should be a really great deep dive into mlearning. (And, New Orleans!)
There may be one more opportunity, so if anyone in Sydney wants to talk, consider Nov 21.
Hope to cross paths with you at one or more of these places!
28 August 2014
Kris Duggan spoke on gamification at the Bay Area Learning Design & Technology MeetUp. He talked about some successes in his badging role and then his new initiative bringing gamification more intrinsically into organizations. He proposed five Goal Science rules that resonated with other principles I’ve heard for good organizations.
27 August 2014
This is a name you’re not likely to know, but I can’t let his passing go without comment. Joe was an intensely private person who had a sizable impact on the field of technology in learning, and I was privileged to know him.
I met Joe when my colleague Jim “Sky” Schuyler, who had hired me for my first job out of college, subsequently dragged me back from overseas to work for him at a new company: Knowledge Universe Interactive Studios (KUIS). I’d stayed in touch with Sky over the years, and I was looking to come back at the same time he had been hired to lead KUIS’s work to be an ISP for the KU companies, but also to create a common platform. I was brought on board to lead the latter initiative.
To make a long story short, initially I reported to Sky, but ultimately he moved on and I began to report to the CEO, Joe, directly. Sky had said that he liked working for people smarter than himself, and if indeed Joe was such, this was quite the proposition, as Sky was not only a Northwestern PhD but a wise colleague in many ways. He’d been a mentor and friend as well as a colleague, and if Sky (reticent as he is) thought highly of Joe, that was high praise indeed.
I got to know Joe slowly. He was quite reserved not only personally but professionally, but he did share his thinking. It quickly became clear that not only did he have the engineering chops of a true techy, he also had the strategic insight of a visionary executive. What I learned more slowly was that he was not just a natural leader, but a man with impeccable integrity and values.
I found out that he’d been involved with Plato via his first job at Battelle, and was suitably inspired to start a company supporting Plato. He moved to the Bay Area to join Atari, and subsequently was involved with Koala Technologies, which created early PC (e.g. Apple) peripherals. His trajectory subsequently covered gaming as well as core technology, eventually ending up at Sega before he convinced the KU folks to let him head up KUIS. He seemed to know everyone.
More importantly, he had the vision to understand system and infrastructure, and barriers to same. He was excited about Plato as a new capability for learning. He supported systems at Koala for new interface devices. He worked to get Sega to recognize the new landscape. In so many ways he worked behind the scenes to enable new experiences, but he was never at the forefront of the public explanation, preferring to make things happen at the back end (despite the fact that he was an engaging speaker: tall, resonant voice, and compelling charisma).
In my short time to get to know him, he shared his vision on a learning system that respected who learners were, and let me shape a team that could (and did) deliver on that vision. He fought to give us the space and the resources, and asked the tough questions to make sure we were focused. We got a working version up and running before the 2001 crash.
He continued to have an impact, leading some of the major initiatives of Linden Labs as they went open source and met some challenging technical issues while negotiating cultural change to take down barriers. He ended up at SportVision, where he was beginning to help them understand they were not about information, but insight. Unfortunately, I didn’t have much view into what was happening there, as it was proprietary and Joe was, as I said, private.
Joe served as a mentor for me. I found him to have deep values, and under his austere exterior he exemplified values of humanity and compassion. I was truly grateful when I could continue to meet him regularly and learn from him as he expressed true interest as well as sharing his insights.
He was taken from us too early, and too quickly. He fought a tough battle at the end, but at least was surrounded by the love of his life and their children as he passed. Rest in Peace.
Update: there’s a memorial site for Joe, http://www.josephbmilleriii.com where you can leave thoughts, view pictures, and more. RIP.
3 July 2014
In the course of answering a question in an interview, I realized a third quip to complement two recent ones. The earliest one (not including my earlier ‘Quips‘) was “curation trumps creation”, about how you shouldn’t spend the effort to create new resources if you’ve already got them. The second one was “from the network, not your work”, about how if your network can have the answer, you should let it. So what’s this new one?
While I’ve previously argued that good learning design shouldn’t take longer, that was assuming you were doing real design in the first place: an analysis, concept and example design, presentation, and practice, not just dumping a quiz on top of content. However, doing real design, good or bad, takes time. And if it’s about knowledge, not skills, a course doesn’t make sense. In short, building courses should be reserved for when they are really needed.
Too often, we’re making courses to try to get knowledge into people’s heads, which usually isn’t a good idea, since our brains aren’t good at remembering rote information. There are rare times when it’s necessary (e.g. medical vocabulary), but we resort to that solution too often because course tools are our only hammer. And it’s wrong.
We should be trying to put information in the world, and reserve the hard work of course building for when it’s proprietary skill sets we’re developing. If someone else has done it, don’t feel like you have to use your resources to do it again; use them to meet other needs: more performance support, or facilitating cooperation and communication.
So, for both principled and pragmatic reasons, you should be looking to resources as a solution before you turn to courses. On principle, they meet different needs, and you shouldn’t use the course when (most) needs can be met with resources. Pragmatically, it’s a more effective use of your resources: staff, time, and money.