Kate O’Neill closed the DevLearn conference with a keynote on tech humanism. With a humorous but insightful presentation, she inspired us to strive for good.
Helen Papagiannis #DevLearn Keynote Mindmap
Helen Papagiannis kicked off the second day of the DevLearn conference. She explored the possibilities of AR with exceptional examples. She went through a variety of concepts, helping us comprehend new opportunities. Exposing the invisible and annotating the world were familiar, but collaborative editing of spatial representations resurrected one of the most interesting (and untapped) potentials of virtual worlds.
Talithia Williams #DevLearn Keynote Mindmap
Talithia Williams presented the afternoon keynote on the opening day of DevLearn. She gave an overview of the possibilities of data, and the basics of data science. She then drew some inferences for learning.
Sophia the Robot #DevLearn Keynote Mindmap
DevLearn opened with a keynote from Sophia the Robot. With an initially scripted presentation, and some scripted questions from host David Kelly, Sophia addressed the differences between AI and robots, with a bit of wit. The tech used to create the illusion was explored, and then the technology was put to the test with some unscripted questions; the responses were pretty good. An interesting start!
Play to Learn
Thinking more about Friston’s Free Energy Principle and the implications for learning design prompted me to think about play. What drives us to learn, and then how do we learn? And play is the first answer, but does it extend? Can we play to learn beyond the natural?
The premise behind the Free Energy principle is that organisms (at every level) learn to minimize the distance between their predictions and what actually occurs. And that’s useful, because we use our predictions to make decisions, and it helps if those decisions get better and better over time. To do that, we build models of the world.
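For the mathematically inclined, here’s a minimal sketch of the usual formalization (my gloss, not Friston’s full treatment): free energy is an upper bound on surprise, so driving it down keeps what the organism observes close to what it predicts.

```latex
% Free energy F upper-bounds surprise (-ln p(o)), since the KL divergence is >= 0.
% q(s): the organism's beliefs about hidden states s (its model of the world)
% p(o, s): the generative model relating hidden states to observations o
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
  \;\ge\; -\ln p(o)
```

Minimizing F does two things at once: it pulls the internal model toward the true state of affairs (the KL term), and it reduces surprise, which is the “minimize the distance between predictions and what occurs” bit in plainer language.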
Now, it’d be possible for us to just sit in a warm dark room so our predictions are right, but we have drives and needs. Food, shelter, and sex are drives that at least occasionally require effort to satisfy. The postulate is that we’ll be driven to learn when the consequences of not learning are higher than the effort of learning.
At this level, animals play to learn things like how to hunt and interact. Parents can help as well. At a higher level than survival, however, can play still work? Can we learn things like finance, mathematics, and other human-made conceptions this way? It’d be nice to make a safe place to ‘play’, to experiment.
Raph Koster, in his A Theory Of Fun, tells us that computer games are fun precisely because they require learning. You need to explore, and learn new tricks, to beat the next level. And computer games can be about surviving in made-up worlds.
The point I’m getting to is that the best learning should be play: low-stakes exploration, tapping into the elements of engagement to make the experience compelling. You want a story about what your goal is, a setting that makes that goal reasonable, and more.
To put it another way, learning should be play. Not trivial, but ‘hard fun’. If we’re not making it safe, and providing guided discovery so learners can internalize the relationships they need to build the models that will drive better decisions, we’re not making learning as naturally aligned as it can be. So please, let your people play to learn. Design learning experiences, not just ‘instruction’.
Writing ongoing
I’ve been doing this blog now for 13.5 years (started in January 2006 with my first post), and have generated over 1600 posts in that time. My productivity intentions started out, perhaps, more ambitious, but they settled down a number of years ago. And, I’m finding, they’ve settled again of late. So it’s time to reflect on the state of my writing ongoing!
So, while I started out hoping for one every biz day, I was always happy to get 2 or 3 per week. And I’d pretty much settled on meeting a self-imposed goal of 2 per week. I kept that up for years; sometimes, like at confs, I’d do 3 per week because of my mind maps, and occasionally I’d only get 1. Yet more weeks than not of late (say, the past few months), I’ve struggled to come up with even 1. What’s going on (he asks himself)?
Ok, during that time, I’ve written four more books (the first one came out before the blog started). And I’ve worked, and written articles, and traveled and spoken, and more. And that’s still the status quo. So why have I slipped of late? What’s changed?
Well, for one, I’ve gone from occasional articles to a monthly column for the Litmos blog for the past 4.5 years, and now to a second monthly column for Learning Solutions for the past 2.5 years. Yet I’d been keeping up until the past few months.
One thing has changed. M’lady started working part time, and now is full time. Which is fine, because I am quite capable of handling some household tasks. Planning meals and cooking haven’t really changed in their demands, but I find I’m spending more time on shopping in particular. Though that shouldn’t be such a barrier. And that change started a year ago.
I’m likely to have another big writing task upcoming (stay tuned ;), and that tends to generate insights. But overall, I’m not feeling positive I’ll be able to continue achieving two posts a week. At least ’til I understand better what’s going on. I’ll shoot for 2, of course, but I feel like I should be open and say that I may only get to 1 a week. That’s where my writing ongoing seems to be headed. We’ll see.
Endorsements, rigor, & scrutability
I was recently asked to endorse two totally separate things. And it made me reflect on just what my principles for such an action might be. So here’s a cleaned-up version of my first thoughts on my principles for endorsements:
First, my reputation is based on rigor in thought and integrity in action. Thus, anything I’d endorse has to be scrutable both in quality of design and in effectiveness of execution.
So, to establish those, I need to do several things.
For one, I have to investigate the product. Not just the top-level concept, but the lower-level details. And this means not only exploring, but devising and performing certain tests.
And that also means investigating the talent behind the design: who’s responsible for things like the science behind it and the ultimate design?
In addition, I expect to see rigor in implementation. What’s the development process? What platform and what approach to development is being used? How is quality maintained? Maintainability? Reliability? I’d want to talk to the appropriate person.
And I’d want to know about customer service. What’s the customer experience? What’s the commitment?
There’ve been a couple of orgs that I worked with over a number of years, and I got to know these things about them (and I largely played the learning science role ;), so I could recommend them (tho’ they didn’t ask for public endorsements) and help sell them in engagements. And I was honest about the limitations as well.
I have a reputation to maintain, and that means I won’t endorse ‘average’. I will endorse, but it’s got to be scrutable at all levels and exceptional in some way, so that I feel I’m showing something unique that will also play out favorably over time. If I recommend it, I need people to be glad they took my advice. And then there’s got to be some recompense for my contribution to success.
One thing I hadn’t thought of on the call was the possibility of limited endorsements, or levels of endorsement. E.g. “This product offers a seemingly unique solution that is valuable in concept”, but not saying “I can happily recommend this approach”. Though the value of that is questionable, I reckon.
Am I overreaching in what I expect for endorsements, or does this make sense?
Tools for LXD?
I’ve been thinking on LXD for a while now, not least because I’ve an upcoming workshop at DevLearn in Lost Wages in October. And one of the things I’ve been thinking about is the tools we use for LXD. I’ve created diagrams (such as the Education Engagement Alignment), and quips, but here I’m thinking of something else. We know that job aids are helpful: things like checklists, decision trees, and lookup tables. And I’ve created some aids for the Udemy course on deeper elearning I developed. But here I want to know: what are you using as tools for LXD? How do you use external resources to keep your design on track?
The simple rationale, of course, is that there are things our brains are good at, and things they’re not. We are pattern-matchers and meaning-makers, naturally making up explanations for things that happen. We’re also creative, finding solutions under constraints. Our cognitive architecture is designed to do this: to help us adapt to the first-level world we evolved in.
However, our brains aren’t particularly good at the second-level world we have created. Complex ideas require external representation. We’re bad at remembering rote and arbitrary steps and details. We’re also bad at complex calculations. This makes the case for tools that help scaffold these gaps in our cognition.
And, in particular, for design. Design tends to involve complex responses, in this case in terms of an experience design. That maps out over content, time, and tools. Consequently, there are opportunities to go awry. Therefore, tools are a plausible adjunct.
You might be using templates for good design. Here, you’d have a draft storyboard, for instance, that ensures you’re including a meaningful introduction, a causal conceptual model, examples, etc. Or you might have a checklist that details the elements you should be including. You could have a model course that you use as a reference.
My question, to you, is what tools are you using to increase the likelihood of a quality design, and how are they working for you? I’d like to know what you’ve found helpful as tools for LXD, as I look to create the best support I can. Please share!
Clear about the concept
I went to hear a talk the other day. It was about competency-based education (CBE) for organizations. Ostensibly. And, while I’m now affiliated with IBSTPI, it’s not like I’m a competency expert. And maybe I expect too much, but I really hope for people to be clear about the concept. Alas, that’s not what I found.
So, it started out reasonably well, talking about how competencies are valuable. There were a number of points, and many made sense, although some were redundant. Maybe I missed some nuance? I try to be open-minded. It’s about creating clear definitions of performance, and aligning those with assessments. Thus, you’re working on very clear descriptions of what people should be doing.
It got interesting when the speaker decided to link CBE to Universal Design for Learning (UDL). And UDL is a good program. It talks about using multiple representations to increase the likelihood that different learners can comprehend and respond. This, in the talk, was mapped to three different segments: engaging the learners in multiple ways, communicating concepts in multiple ways, and allowing assessment in multiple ways. And this is good. For learning. Does it make sense for CBE?
To start, the argument was that you should present the rationale for the learning in multiple ways. While in general CBE inherently embodies meaningfulness in the nature of clear and needed skills, I don’t have a problem with this. I argue you should hook learners in emotionally and cognitively, and those can be separate activities. There was a brief mention of something like ‘learning styles’, but while now wary, I was ready to let it go.
However, the talk went on to make a case for multiple representations of content. And here the slide explicitly said ‘learning styles’ and used VARK. And don’t get me wrong, multiple representations and media are good, but not for learning styles! The current status is that there’s essentially no valid instrument to measure learning styles, and no evidence that, even if there were, it would make a difference. None. So, of course, I raised the issue. And we agreed that maybe not for learning styles, but multiple representations weren’t bad.
The final point was that there could be multiple forms of assessment. At this point, I wasn’t going to interrupt again, but at the end of the session I raised the point that the critical element of CBE is aligning the assessment with the performance! You can’t have them do an interpretative dance about identifying fire hazards, for instance; you have to have them identify fire hazards! So, here the audience ultimately agreed that variability was acceptable as long as it measured the actual performance. Again, I don’t think the speaker was clear about the concept.
There were two major flaws in this talk. One was casually mashing up a couple of essentially incommensurate ideas. CBE and UDL aren’t natural partners. There can be overlapping concepts, but… The second, of course, is using a popular but fundamentally flawed myth about learning. If you’re going to claim authority, don’t depend on broken concepts.
To put it another way, I think it’s fair to expect speakers to be clear about the concept. (Either that, or maybe the lesson is that Clark shouldn’t be allowed to listen to normal speakers. ;) Please, please, know what you’re talking about before you talk about it. Is that too much to ask?
Working with you
I was talking with my better half, who’s now working at a nursery. Over time, she has related stories of folks coming in to ask for assistance. And the variety is both interesting and instructive. There’s a vast difference in how people can be working with you.
So, for one, she likes to tell stories of people who come in saying “you know, I want something ‘green’”. Or, worse, “I want a big tree that doesn’t require any watering at all”. (Er, doesn’t exist.) The one she told me today was this lady who came in wanting “you know, it’s white and grows like <hand gesture showing curving over like a willow>”. So m’lady showed her a plant fitting the description. But “no, it’s not got white flowers”. It ended up being a milkweed, which isn’t white and stands straight up!
What prompted this reflection was the situation she cited of this other customer. He comes in with a video of the particular section he wants to work on this time, with measurements, and a brief idea of what he’s thinking. Now this is a customer that’s easy to help; you can see the amount of shade, know the size, and have an idea of what the goal is.
I related this (of course ;) to L&D. What you’d like is the person who comes and says “I have this problem: performance should be <desired measurement> but instead it’s only <current measurement>. What steps can we take to see if you can help?” Of course, that’s rare. Instead you get “I need a course on X.” At least, until you start changing the game.
JD Dillon tweeted “…But in real life they can’t just say NO to the people who run the organization. ‘Yes, and …’ is a better way to get people to start thinking differently.” And that’s apt. If you’ve always said “yes”, it’s really not acceptable to suddenly start saying “no”. Saying “Yes and…” is a nice way to respond. Something like “Sure, so what’s the problem you’re hoping this course will solve?”
And, of course, you should be this person too. “Let me tell you why I’d like to buy a VR headset,” and go on to explain how this critical performance piece is spatial and visceral and you want to experiment to address it. Or whatever. Come at it from their perspective, and you have a better chance, I reckon.
You won’t always get the nice customers, but if you take time and work them through the necessary steps at first, maybe you can change them to be working with you. That’s better than working for them, or fighting with them, no?