This morning Elliott interviewed Cathy Casserly from Creative Commons and Dean Kamen. Cathy was a passionate advocate for openness and sharing. She talked about going further into learning, and I was reminded of Project Tin Can and the move toward a more general learning path. Dean recounted his interesting childhood and then launched into his inspiring project to make science cool again.
Layered Learning
Last week, I posted about a model where a system could provide a sage who looks at the events of your life and provides support. I want to elaborate that model by looking at it in a different way.
The notion here is that you have events in your life, across the bottom. And you have some learning goals, e.g. to learn about project management and about running meetings. You might get some initial content about those two goals, but then let’s focus on developing that learning over time.
The events in your life give you a chance to use them as learning experiences, not just performance opportunities. If there are not enough such events in your life, you might add interstitial activities (those in dashed lines), but you can be developed across learning goals abcd and uvwxyz, both through delivered experiences and through learning wrapped around real experiences.
Let me make that latter point clearer. Say you’ve got some event like project work, and an associated learning goal (e.g. concept ‘d’ in a curriculum). A system could see the calendar entry for the project work and, through tagging or other semantic means, recognize the relationship with learning goal ‘d’. Then, some relevant activation and concept material might precede the event, an aid could appear during it, and either a self-evaluation metric or a connection to a live person could happen afterward. All of this could be delivered, for instance, through mobile devices.
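To make the mechanics concrete, here is a minimal sketch of the matching step. All the names (`LearningGoal`, `CalendarEvent`, `support_for`) and the tag-overlap rule are my own illustrative assumptions, not a description of any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class LearningGoal:
    goal_id: str                              # e.g. concept 'd' in a curriculum
    tags: set = field(default_factory=set)    # semantic tags for matching
    prep: str = ""                            # activation material, shown before
    aid: str = ""                             # performance aid, shown during
    reflect: str = ""                         # reflection prompt, shown after

@dataclass
class CalendarEvent:
    title: str
    tags: set

def support_for(event, goals):
    """Match a calendar entry to learning goals via shared tags, and
    return the (before, during, after) support to deliver around it."""
    matched = [g for g in goals if g.tags & event.tags]
    return [(g.prep, g.aid, g.reflect) for g in matched]

goals = [LearningGoal("d", {"project-management"},
                      prep="Review the scoping checklist",
                      aid="Scoping checklist card",
                      reflect="Did the scope hold? What slipped?")]
event = CalendarEvent("Project kickoff", {"project-management", "meeting"})
print(support_for(event, goals))
```

The point is only that, once events and goals carry tags, wrapping learning around a real event reduces to a query plus three delivery moments.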
The goal is to use the events in your life as learning opportunities as much as possible (and preferable). We can also mix in some simulated practice (e.g. an alternate reality game) if real events aren’t occurring at a sufficient rate, but the goal is to match the learning development plan to the rate at which we effectively learn. And, to be clear, we do not learn effectively from a one-off knowledge dump and a quiz, which is what much of what we do actually amounts to.
As I’ve mentioned before, we have the magic, the sufficiently advanced technology Arthur C. Clarke talked about, to hand. We should start using it to develop us towards our goals in appropriate ways. The opportunity is there; who’s ready to seize it?
President Clinton Keynote Mindmap
Ken Zolot Mindmap
Learning 2011 Interview Mindmaps
Koulopoulos Keynote Mindmap
Michio Kaku Keynote Mindmap
Sage at the Side
A number of years ago, I wrote an article (PDF) talking about how we might go beyond our current ‘apart’ learning experiences. The notion is what I call ‘layered learning’, where we don’t send you away from your life to go attend a learning event, but instead layer it around the events in your life. This is very much part of what I’ve been calling slow learning, and a recent conversation has catalyzed and crystallized that thought.
Think about the sort of ideal learning experience you might have. As you traverse the ‘rocky road’ of life, imagine having a personal coach who would observe the situation, understand the context of the task and the desired goal, and could provide some aid (from some sack of resources) that could assist you in immediate performance. Your performance would improve.
Let’s go further. This sage, moreover, could draw from some curricula (learning trajectories) and prepare you beforehand and guide reflection afterward, so that the real performance event now becomes a learning opportunity as well, helping you understand why this particular approach makes sense, how to adapt it, and more. In this way, the sage moves from performance coach to learning mentor.
One step further would be to have learning trajectories not only about the domain (e.g. engineering) but also about quality, management, learning, and more. So learners could be developed as learners, and as persons, not just as performers.
Now this would be ideal, but individual mentors don’t scale very well. But here’s the twist: we can build this. We can have curricula, learning objects, and build a sage via rules that can do this. Imagine going through your workday with a device (e.g. an app phone or a small tablet) that knows what you’re doing (from your calendar), which triggers content to be served up before, during, and after tasks, that develops you over time. We can build the tutor, develop and access the curricula and content, deliver it, track it.
I hope this is clear. There are other ways to think about this, and I’ll see if I can’t capture them in some way; stay tuned. The limitations are no longer the technology; the limits are between our ears. Reckon?
Don’t be Complacent and Content
Yesterday I attended SDL’s DITAFest. While it’s a vendor-driven show, there were several valuable presentations, with information to help get clearer about designing content. And we do need to start looking at the possibilities on tap. Beyond deeper instructional design (tapping into both emotion and effective instruction, not the folk tales we tell about what good design is), we need to start looking at content models and content architecture.
Let me put this a bit in context. When I talk about the Performance Ecosystem, I’m talking about a number of things: improved instructional design, performance support, social learning, and mobile. But the “greater integration” step is one that both yields immediate benefits and sets the stage for some future opportunities. Here we’re talking about investing in the underlying infrastructure to leverage the possibilities of analytics, semantics, and more, and content architecture is a part of that.
So DITA is the Darwin Information Typing Architecture, and what it’s about is structuring content. It’s an XML-based approach developed at IBM that not only lets you separate content from how it’s expressed, but also lets you add some semantics on top of it. This has mostly been used for material like product descriptions, such as technical writers produce, but it can be used for white papers, marketing communications, and any other information. Like eLearning. However, the eLearning use is still idiosyncratic; one of the top DITA strategy consultants told me that the Learning and Training committee’s contribution has not yet been sufficient.
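To give a feel for the separation of content from expression, here is a small sketch using a simplified, DITA-inspired topic. The topic markup and the `to_html` rendering are illustrative assumptions of mine, not a complete or canonical DITA schema:

```python
import xml.etree.ElementTree as ET

# A simplified, DITA-inspired topic: the source carries structure and
# semantic metadata (the 'audience' attribute), and no presentation.
topic_xml = """
<topic id="run-meetings" audience="novice">
  <title>Running Effective Meetings</title>
  <shortdesc>Why agendas matter.</shortdesc>
  <body>
    <p>Every meeting needs an agenda circulated in advance.</p>
  </body>
</topic>
"""

topic = ET.fromstring(topic_xml)

# Because structure is explicit, the same single source can be rendered
# as HTML here, or as a job aid, a printed manual, or an eLearning screen.
def to_html(t):
    return "<h1>{}</h1><p>{}</p>".format(
        t.findtext("title"), t.findtext("body/p"))

print(topic.get("audience"))   # semantic metadata, usable for filtering
print(to_html(topic))          # one of many possible renderings
```

The single-sourcing payoff claimed for DITA comes from exactly this split: edit the topic once, and every rendering pipeline picks up the change.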
The important point, however, is that articulating content has real benefits. A panel of implementers mentioned reduced tool costs, savings from reduced redundancy, and decreased time to create and maintain information. There were also strategic benefits in breaking down silos and finding common ground with other groups in the organization. The opportunity to wrap quality standards around the content adds another avenue for benefits. Reduced server storage was yet another. As learning groups start taking responsibility for performance support and other areas, these opportunities will be important.
And the initial investment in focusing on content more technically is a step along the path from web 2.0 to web 3.0: custom content generation for the learner or performer. A further step is context-sensitive customization. This is really only possible in a scalable way if you get your arms around paying tighter attention to defining content: tagging, mapping, and more.
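The custom-generation step can be sketched in a few lines. The content store, tag scheme, and `select` function below are hypothetical, just to show that once content is tagged, context-sensitive delivery becomes a query rather than hand-authoring:

```python
# Hypothetical content store: each chunk carries semantic tags instead of
# being locked into one course, so it can be recombined on demand.
content = [
    {"id": "pm-01", "tags": {"project-management", "intro"}, "format": "text"},
    {"id": "pm-02", "tags": {"project-management", "advanced"}, "format": "video"},
    {"id": "mt-01", "tags": {"meetings", "intro"}, "format": "text"},
]

def select(tags_wanted, context):
    """Return chunks matching the learner's goal tags and the current
    context, e.g. text-only when on a mobile device without video."""
    return [c for c in content
            if tags_wanted & c["tags"] and c["format"] in context["formats"]]

mobile_context = {"formats": {"text"}}  # assumed constraint for this device
print([c["id"] for c in select({"project-management"}, mobile_context)])
# → ['pm-01']
```

Without the tagging and mapping investment up front, there is nothing for a query like this to run against, which is why the content architecture work comes first.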
It may seem scary, but the first steps aren’t that difficult, and it’s an investment in efficiencies initially, and in a whole new realm of capability going forward. It may not be for you tomorrow, but you have to have it on your radar.
Thinking Strategy, Pt. 2
Building on yesterday’s post, here’s another way of thinking about it: I’ve been trying to drill several layers down. As with the caveat on my attempt at mind-mapping the performance ecosystem, this only begins to scratch the surface, as each of these elements unpacks further, but it’s an attempt.
The plan you take (your sequence of prioritized goals), the metrics you use, and your schedule, will be individual. However, the other elements will share some characteristics.
Your governance plan should include a schedule of when the group meets, what policies guide the role of governance, what metrics the governance group uses to look at the performance of the group implementing the plan (i.e. how the executors of the strategy are doing, not how the strategy itself is doing), and what partners are included.
The strategy will need partners, including fundamental ones providing necessary components (e.g. the IT group), and members who may need to be included for political reasons, such as power, budget, or related interests.
The resources needed will include the people, the tools, and any infrastructure elements to be counted upon.
Support capability will include supporting the team with any questions they might need answered, and also supporting the folks the strategy is for.
And there will need to be policies around responsibility for support, access to resources, and other issues that will guide how the strategy is put in place, accounting for concerns like security and risk.
I’m sure I’m forgetting something, so what am I missing?