In a conversation last week (ok, an engagement), the topic of content systems came up. It's something I've argued for before, in several ways: separate content from how it's delivered; pull content together by rules, not hardwiring; and get the granularity right. I used to think it was too early to push this message, but the time when we can act on it is fast approaching, so it's worth revisiting.
This stands in opposition to the notion of pre-packaged content. MOOCs showed that folks want to drill in to just what they need. Yet we still pull everything together and launch it as one total, final solution. We are moving to smaller chunks (all for the better, even if the movement is burdened with a misleading label). But there's more.
The first point is about content models: we should start designing our content as smaller chunks. My heuristic is the smallest thing you'd give one person or another. My more general principle is that this resolves to breaking content down by its learning role: a concept model is different from an example, which is different from a practice.
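As a rough sketch (the names and fields here are mine, not any standard schema), a content model along these lines might tag each chunk with its learning role:

```python
from dataclasses import dataclass
from enum import Enum

class LearningRole(Enum):
    """The pedagogical role a chunk plays; the boundary for granularity."""
    CONCEPT = "concept"
    EXAMPLE = "example"
    PRACTICE = "practice"

@dataclass
class ContentChunk:
    """The smallest thing you'd give one person or another."""
    chunk_id: str
    role: LearningRole
    topic: str
    body: str

# A concept chunk, separate from any example or practice on the same topic:
chunk = ContentChunk("c-101", LearningRole.CONCEPT, "pricing",
                     "A concept model of how pricing works...")
print(chunk.role.value)  # concept
```

The point of the enum is that delivery logic can then ask "give me a practice item on this topic" rather than unpacking a monolithic course.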
This approach emerged from an adaptive learning system initiative I led. It has since played out as a mechanism supporting several initiatives in delivering content appropriately. For one, it supported different business products from the same content repository. For another, it was about delivering the right thing at the right time.
Which leads to the second point: being able to pick and deliver the right thing for the context. This includes adaptive systems for learning, but also context-based performance support. With a model of the learner, the context, and the content, you can write rules that put these together to identify the right thing to push.
You can go further. Think of two different representatives from the same company visiting a client. A sales person and a field engineer are going to want different things in the same location. So you can add a model of ‘role’ (though that can also be tied to the learner model).
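A minimal sketch of such rules, assuming made-up models and a toy repository (none of this is from a real product), shows how learner, context, and role combine so the sales person and the field engineer get different things in the same location:

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    learner_id: str
    role: str                      # e.g. "sales" or "field_engineer"
    completed: set = field(default_factory=set)

@dataclass
class Context:
    location: str                  # e.g. "client_site"
    task: str                      # e.g. "install"

# Content repository: metadata-tagged chunks, not hardwired packages.
CONTENT = [
    {"id": "pitch-deck",    "for_role": "sales",          "task": "pitch",   "kind": "example"},
    {"id": "install-steps", "for_role": "field_engineer", "task": "install", "kind": "practice"},
]

def select(learner: Learner, context: Context) -> list:
    """Rules pull content together: match role and task, skip what's done."""
    return [c["id"] for c in CONTENT
            if c["for_role"] == learner.role
            and c["task"] == context.task
            and c["id"] not in learner.completed]

# Same location, different roles, different content:
engineer = Learner("e1", "field_engineer")
print(select(engineer, Context("client_site", "install")))  # ['install-steps']
```

The design choice worth noting: the rules live outside the content, so changing what gets delivered means editing rules or metadata, not repackaging a course.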
There’s more, of course. To do this well requires content strategy, engineering, and management. Someone put it this way: strategy is what you want to deliver, engineering is how, and management is overseeing the content lifecycle.
Ultimately, it's about moving from hardwired content to flexible delivery. That's both possible and desirable. Moreover, it's the future. As we see the movement from LMS to LXP, we realize that it's about delivering just what's needed, when it's useful. And recognizing that LXPs are portals, not engines for creating experiences, we see the need for federated search.
There’s more: semantics means we can identify what things are (and are not), so we can respond to queries. With chatbot interfaces, we can make it easier to automate the search and offering to deliver the right thing to the right person at the right time.
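To make that concrete (as an illustration only; the tags and query interface are hypothetical), semantic metadata lets a chatbot-style query identify what each thing is, and is not:

```python
# Hypothetical repository: each chunk declares what it is and what it's about.
CHUNKS = [
    {"id": "net-concept",  "is_a": "concept",  "about": {"networking"}},
    {"id": "net-practice", "is_a": "practice", "about": {"networking", "routers"}},
    {"id": "sec-example",  "is_a": "example",  "about": {"security"}},
]

def answer(query_terms, wanted_kind=None):
    """Return ids of chunks whose semantics overlap the query,
    optionally filtered by what kind of thing the asker wants."""
    return [c["id"] for c in CHUNKS
            if c["about"] & set(query_terms)
            and (wanted_kind is None or c["is_a"] == wanted_kind)]

# "Give me something to practice on networking":
print(answer({"networking"}, "practice"))  # ['net-practice']
```

Because the system knows `net-concept` is a concept and not a practice, it can answer the query precisely instead of returning the whole topic.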
The future is here; we see it in web interfaces all over the place. Why aren’t we seeing it yet in learning? There are strong cognitive reasons (performance support, workflow learning, self-directed and self-regulated learning). And the technology is no longer the limitation. So let’s get on it. It’s time to think content systems, not content packages.