I apparently talk about alignment a lot. There are good reasons, of course. For one, I’m referring to two different alignments. One is aligning organizations with the way our brains work; otherwise, we won’t get the best out of people. The other is aligning our learning experience designs with what we know about how our brains work. Also critically important. In this case, prompted as always by conversations, I realized that I wanted to explore the nuances of the latter.
So, I’ve talked before about how we should make sure we have meaningful objectives, and then align practice to them. I’ve also become enlightened about how important examples are. But within both of those, there’s more.
It came up in reviewing a design, looking to refine the approach. There were examples, but they weren’t being used systematically enough. The practice also wasn’t quite reflecting what people actually do. In the course of the conversation, I realized that some nuances seemed to be missing.
When you’re focusing on performance, you should be looking at what people will need to be doing. Too often, folks talk about what they want people to know. However, what matters is what people do. Thus, you really need to dig down into that.
Then, you need to make sure your examples show people doing whatever it is they need to be able to do. Similarly, you need to ask people to do that in practice, as well. Good examples should have a narrative flow and show the underlying thinking. Good practice should require contextualized decision making, like what they’ll actually have to perform. Not characterizing the situation, but making decisions based upon that situation. So, not asking “is this an X or a Y situation?”, but instead “do you choose action A or B?”
Then, of course, there are the actual choices of situation. The first task should be elementary. It may require scaffolding: the circumstances might be simple, or part of the task might already be performed, etc. Then, you systematically add complexity to the task, while also broadening the range of situations seen. You’re simultaneously supporting both the acquisition of skill and the ability to transfer it to appropriate situations.
Then, of course, you want to make the situations appropriately compelling. That may mean choosing the best stories, using some exaggeration, and good storytelling. For practice, of course, there’s also the feedback: performance-focused, model-based, and minimal.
Look, I’m not saying this is easy. If it were easy, we’d get AI to do it ;). Yet AI doesn’t, and really can’t, understand the nuances of aligning. We can, and do. Yes, done properly, it is somewhat rocket science. We’re talking about systematically creating change in arguably the most complex thing in the known universe, after all. However, we do have good principles and practices. We just need to make sure we know them, and use them.
That’s what makes our field so fascinating and important, after all. The creativity involved is also why it’s fun. Then, we’re also achieving important goals: improving people. We owe it to our stakeholders to do it right. (We are the leaders of the future economy, after all!) That’s my take; what am I missing?