George Siemens kicked off the EDGEX conference with a broad-reaching and insightful review of the changes in higher education.
Reimagining Learning
On the way to the recent Up To All Of Us unconference (#utaou), I hadn’t planned a personal agenda. However, I was going through the diagrams that I’d created on my iPad, and discovered one that I’d frankly forgotten. Which was nice, because it allowed me to review it with fresh eyes, and it resonated. And I decided to put it out at the event to get feedback. Let me talk you through it, because I welcome your feedback too.
Up front, let me state at least part of the motivation. I’m trying to capture a rethinking of education, or formal learning. I’m tired of anything that lets folks think a knowledge dump and a test will lead to meaningful change. I’m also trying to ‘think out loud’ for myself, and to start getting more concrete about learning experience design.
Let me start with the second row from the top. I want to start thinking about a learning experience as a series of activities, not a progression of content. These can be a rich suite of things: engagement with a simulation, a group project, a museum visit, an interview, anything you might choose for an individual to engage in to further their learning. And, yes, it can include traditional things: e.g. read this chapter.
This, by the way, has a direct relation to Project Tin Can, a proposal to supersede SCORM, allowing a greater variety of activities: Actor – Verb – Object, or I – did – this. (For all I can recall, the origin of the diagram may have been an attempt to place Tin Can in a broad context!)
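To make the Actor – Verb – Object idea concrete, here is a minimal sketch of what an ‘I did this’ statement could look like as data. To be clear, the field names and values are my own illustrative assumptions, not the actual Tin Can specification.

    # Illustrative only: an "I did this" activity statement, in the spirit of
    # Tin Can's Actor - Verb - Object triple. The field names and values are
    # assumptions for illustration, not the actual specification.
    statement = {
        "actor": "pat.learner@example.com",            # who did it
        "verb": "interviewed",                         # what they did
        "object": "a practicing industrial designer",  # what it was done with
    }

The point is the shape: any of the activities above, not just ‘completed the course’, could be reported as a simple statement like this.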
Around these activities, there are a couple of things. For one, content is accessed on the basis of the activities, not the other way around. For another, the activities produce products, as well as reflections.
For the activities to be maximally valuable, they should produce output. A sim could produce a trace of the learner’s exploration. A group project could provide a documented solution, or a concept-expression video or performance. An interview could produce an audio recording. These products are portfolio items going forward, and assessable items. The assessment could be self, peer, or mentor.
However, in the context of ‘make your thinking visible’ (aka ‘show your work’), there should also be reflections, or cognitive annotations. The underlying thinking needs to be visible for inspection. These, too, are part of the portfolio, and assessable. This, however, is where the opportunity lies to really recognize whether the learner is, or is not, getting the content, and to detect opportunities for assistance.
The learner is driven to content resources (audios, videos, documents, etc.) by meaningful activity. This is in opposition to the notion that a content dump happens before meaningful action. However, prior activities can ensure that learners are prepared to engage in the new activities.
The content could be pre-chosen, or the learners could be scaffolded in choosing appropriate materials. The latter is an opportunity for meta-learning. Similarly, the choice of product could be determined, or left up to learner/group choice, again an opportunity for learning cross-project skills. Helping learners create useful reflections is valuable (I recall guiding honours students to take credit for the work they’d done; they were blind to much of their own hard work!).
When I presented this to the groups, there were several questions asked via post-its on the picture I hand-drew. Let me address them here:
What scale are you thinking about?
This unpacks in several ways. What goes into activity design is a whole separate area, and learning experience design may well play a role beneath this level. The real issue, however, is the granularity of the activities. I think about this at several scales, from an individual lesson plan to a full curriculum. The choice of evaluation should be competency-based, assessed by rubrics, even jointly designed ones. There is a lot of depth linked to this.
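To give a flavor of the ‘competency-based, assessed by rubrics’ piece in a tech-mediated setting, here is a minimal sketch; the competencies, levels, and the assess function are purely hypothetical illustrations, not a prescription.

    # Purely hypothetical sketch: a rubric as data, so that self, peer, or mentor
    # assessments of products and reflections can be recorded against competencies.
    RUBRIC = {
        "communicates design rationale": ["absent", "emerging", "competent", "exemplary"],
        "applies domain concepts":       ["absent", "emerging", "competent", "exemplary"],
    }

    def assess(item, assessor, scores):
        """Record an assessment of a portfolio item (product or reflection)."""
        for competency, level in scores.items():
            assert level in RUBRIC[competency], f"unknown level: {level}"
        return {"item": item, "assessor": assessor, "scores": scores}

    # e.g. a peer assessing a group project's concept video
    peer_view = assess("concept-video.mp4", "peer", {
        "communicates design rationale": "competent",
        "applies domain concepts": "emerging",
    })

Whether such a rubric is pre-designed or jointly negotiated with learners is exactly the kind of choice that shifts with scale.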
How does this differ from a traditional performance-based learning model?
I hadn’t heard of performance-based learning. Looking it up, there seems to be considerable overlap, as there is with outcome-based learning, problem-based learning, and service learning, and similarly with Understanding By Design. It may not be more; I haven’t yet done the side-by-side comparison. It’s scaling it up, and arguably a different lens, and maybe more, or not. Still, I’m trying to carry it to more places, and help provide ways to think anew about instruction and formal education.
An interesting aside, for me, is that this does segue to informal learning. That is, you, as an adult, choose certain activities to continue to develop your ability in certain areas. Taking this framework provides a reference for learners to take control of their own learning, and develop their ability to be better learners. Or so I would think, if done right. Imagine the right side of the diagram moving from mentor to learner control.
How much is algorithmic?
That really depends. Let me answer that in conjunction with this other comment:
Make a convert of this type of process out of a non-tech traditional process and tell that story…
I can’t do that now, but one of the attendees suggested this sounded a lot like what she did in traditional design education. The point is that this framework is independent of technology. You could be assigning studio and classroom and community projects, and getting back write-ups, performances, and more. No digital tech involved.
There are definite ways in which technology can assist: providing tools for content search, and for product and reflection generation, but this is not about technology. You could be algorithmic in choosing from a suite of activities, using a set of rules that govern recommendations based upon learner performance, available content, and so on. You could also be algorithmic in programming some feedback around tech-traversal. But that’s definitely not where I’m going right now.
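Still, to illustrate what ‘algorithmic’ could mean here, the following is a minimal sketch of rule-driven activity recommendation; the rules, activity names, and learner fields are all invented for illustration, not a proposal.

    # Invented for illustration: pick the next activity from an available suite
    # using simple rules over learner performance and flagged reflections.
    def recommend_next(learner, available):
        # A flagged reflection suggests a misconception: route to a worked example.
        if learner.get("reflection_flagged") and "worked example" in available:
            return "worked example"
        # Strong recent performance: stretch the learner with a group project.
        if learner.get("recent_score", 0) >= 0.8 and "group project" in available:
            return "group project"
        # Otherwise default to a scaffolded simulation, or whatever comes first.
        return "simulation" if "simulation" in available else available[0]

    print(recommend_next(
        {"recent_score": 0.85, "reflection_flagged": False},
        ["simulation", "group project", "worked example"],
    ))  # -> group project

The same structure could just as easily be a human mentor’s checklist, which is the point: the framework, not the technology, is what matters.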
Similarly, I’m going to answer two other questions together:
How can I look at the path others take? and How can I see how I am doing?
The portfolio is really the answer. You should be getting feedback on your products, and seeing others’ feedback (within limits). This is definitely not intended to be an individual experience; hopefully it happens in a group, or at least some of the activities do (e.g. commenting on blog posts, participating in a discussion forum, etc.). In a tech-mediated environment, you could access your feedback and see traces of others’ (anonymized) trajectories.
The real question is: is this formulation useful? Does it give you a new and useful way of thinking about designing learning, and supporting learning?
70:20:10 Tech
At the recent Up To All Of Us event (#utaou), someone asked about the 70:20:10 model. As you might expect, I mentioned that it’s a framework for thinking about supporting people at work, but it also occurred to me that there might be a reason folks have not addressed the 90: in the past, there was little they could do about it.
Beyond providing courses on how to coach, and making job aids, there was little that could be done; the technology wasn’t advanced enough. But that’s changed.
Several things have changed. One is the rise of social networking tools: blogs, micro-blogs, wikis, and more. Another is the rise of mobile. Together, they let us support the 90 in fairly rich ways.
For the 20, coaching and mentoring, we can start delivering that wherever needed, via mobile. Learners can ask for, or even be provided, support more closely tied to their performance situations regardless of location. We can also have a richer suite of coaching and mentoring happening through Communities of Practice, where anyone can be a coach or mentor, and be developed in those roles, too. Learner activity can be tracked, as well, leaving traces for later review.
For the 70, we can first of all start providing rich job aids wherever and whenever needed, including a suite of troubleshooting information and even interactive wizards. We can also have help on tap, freed of the barriers of time and distance. We can look up information as well, if our portals are well designed. And we can find people to help, whether with information or collaboration.
The point is that we no longer have limits in the support we can provide, so we should stop having limits in the help we *do* provide.
Yes, other reasons could still also be that folks in the L&D unit know how to do courses, so that’s their hammer making everything look like a nail, or they don’t see it as their responsibility (to which I respond “Who else? Are you going to leave it to IT? Operations?”). That *has* to change. We can, and should, do more. Are you?
#LearningStyles Awareness Day review
I want to support David Kelly’s Learning Styles Awareness Day, but have written pretty much all I want to say on the matter. In short, yes, learners differ. And, as a conversation with someone reminded me, it helps for learners to look at how they learn, so as to find ways to optimize their chances for success. Yet:
There’s no psychometrically-valid learning styles assessment out there.
There’s no evidence that adapting learning to learning styles is of use.
So what to do?
Use the best learning you can (at the end of the video).
Then help learners accommodate.
Here’re my previous thoughts, developing towards a proposal for how to consider learning styles, in chronological order:
Learning Styles, Brain-Based Learning, and Daniel Willingham
My problem with learning styles really is the people flogging them without a) acknowledging the problems, and b) appropriately limiting the inferences. Sometimes it seems like playing ‘whack-a-mole’…
MOOC reflections
A recent phenomenon is the MOOC, or Massive Open Online Course. I see two major manifestations: the type I have participated in briefly (mea culpa) as run by George Siemens, Stephen Downes, and co-conspirators, and the type being run by places like Stanford. Both share large numbers of students, and laudable goals. Each also has flaws, in my mind, which illustrate some issues about education.
The Stanford model, as I understand it (and I haven’t taken one), features a rigorous curriculum of content and assessments, in technical fields like AI and programming. The goal is to ensure a high quality learning experience for anyone with sufficient technical ability and access to the Internet. Currently there is a discussion board, but otherwise the experience is, effectively, solo.
The connectivist MOOCs, on the other hand, are highly social. The learning comes from content presented by a lecturer, and then dialog via social media, where the contributions of the participants are shared. Assessment comes from participation and reflection, without explicit contextualized practice.
The downside of the latter is just that: with little direction, the courses really require effective self-learners. These courses assume that, through the process, learners will develop learning skills, and the philosophical underpinning is that learning is about making the connections oneself. As Lisa Chamberlin and Tracy Parish pointed out in an article, this can be problematic. As of yet, I don’t think effective self-learning skills are a safe assumption (and we do need to remedy that).
The problem with the former is that learners are largely dependent on the instructors, and will end up with only the instructors’ understanding; they aren’t seeing how other learners conceptualize the information, and consequently aren’t developing a richer understanding. You have to have really high quality materials, and highly targeted assessments. The success will live and die on the quality of the assessments, until the social aspect is engaged.
I was recently chided that the learning theories I subscribe to are somewhat dated, and guilty as charged; my grounding has taken a small hit from my not being solidly in the academic community of late. On the other hand, I have yet to see a theory that is as usefully integrative of cognitive and social learning theory as Cognitive Apprenticeship (and I’m willing to be wrong), so I will continue to use (my somewhat adulterated version of) it until I am otherwise informed.
From the Cognitive Apprenticeship perspective, learners need motivating and meaningful tasks around which to organize their collective learning. I reckon more social interaction will be wrapped around the Stanford environment, and that, in the connectivist MOOCs (unless I’ve simply not experienced the formal version), learners will be expected to take on the responsibility of making it meaningful, but will be scaffolded in doing so (if they aren’t already).
The upshot is that these are valuable initiatives from both pragmatic and principled perspectives, deepening our understanding while broadening educational reach. I look forward to seeing further developments.
UTAOU Sunday Mindmap
UTAOU Saturday Mindmap
Making it visible and viral
On a recent client engagement, the issue was spreading an important initiative through the organization. The challenges were numerous: getting consistent uptake across management and leadership, aligning across organizational units, and making the initiative seem important and yet also doable in a concrete way. Pockets of success were seen, and these are of interest.
For one, a particular unit had focused on making the initiative viral, and consequently had selected and trained appropriate representatives dispersed through the organization. These individuals were supported and empowered to incite change wherever appropriate, and they were seeing initial signs of success. The lesson here is that top-down is not always sufficient, and that benevolent infiltration is a valuable addition.
The other pocket of success was also social, in that the approach was to make the outcomes of the initiative visible. In addition to mantras, graphs showing current status were placed in prominent places. Further, suggestions for improvement were not only solicited, but made visible and their status tracked. Again, indicators were positive on these moves.
The point is that change is hard, and a variety of mechanisms may be appropriate. You need to understand not just what formal mechanisms you have, but also how people actually work. I think that too often, planning fails to anticipate the effects of inertia, ambivalence, and apathy. More emotional emphasis is needed, more direct connection to individual outcomes, and more digestion into manageable chunks. This is true for elearning, learning, and change.
In looking at attitude change, and from experience, I recognize that even if folks are committed to change, it can be easy to fall back into old habits without ongoing support. Confusion in message, lack of emotional appeal, and idiosyncratic leadership only reduce the likelihood. If it’s important, get alignment and sweat the details. If it’s not, why bother?
At the Edge of India
A few months back, courtesy of my colleague Jay Cross, I got into discussions about the EdgeX conference, scheduled for March 12-14 in New Delhi. Titled the “Disruptive Educational Research Conference”, it certainly has intriguing aspects.
I was asked to talk about games, the topic of my first book. Owing to unfortunate circumstances (my friend and co-speaker on games had to change plans), it looks like I’ll also be talking about mobile (books two and three) which is exciting despite the circumstances.
However, what’s really exciting is the lineup of other people speaking. I’ve been a fan of George Siemens and Stephen Downes for years, and an eager but less focused follower of Dave Cormier and Alec Couros. I’ve only met Stephen once, and am eager to meet the rest. I don’t really know the other speakers, but their positions and descriptions suggest that this is going to be a great event. Meeting new and interesting people is one of the reasons to go to a conference in the first place! And, of course, Jay will be there too.
I’ve been to India before, as one of my partners has origins there, and it’s a fascinating place. Part of the conference is to look at how the latest concepts of learning play out in the Indian context, but given that it spans K12, higher ed, and corporate, we’ll be talking about principles that cut across contexts.
Looking at disruptive concepts, with top thinkers, in an intriguing context, makes this an exciting opportunity, I reckon. I realize it may not make sense for many readers, but I’m hoping some will be intrigued enough to check it out, and there will be a steady stream of related materials. Already there are links from many speakers, and resources about the Indian education context. If you do go, please say hi!
Social media budget line item?
Where does social media fit in the organization? In talking with a social media entrepreneur over beers the other day, he mentioned that one of his barriers in dealing with organizations was that they didn’t have a budget line for social media software.
That may sound trivial, but it’s actually a real issue in terms of freeing up the organization. In one instance, it had been the R&D organization that undertook the cost. In another case, the cost was attributed to the overhead incurred in dealing with a merger. These are expedient, but wrong.
It’s increasingly obvious that it’s more than just a ‘nice to have’. As I’ve mentioned previously, innovation is the only true differentiator. If that’s the case, then social media is critical. Why? Because the myth of individual innovation is busted, as clearly told by folks like Keith Sawyer and Steven Berlin Johnson. So, if it’s not individual, it’s social, and that means we need to facilitate conversations.
If we want people to be able to work together to create new innovations, we don’t want to leave it to chance. In addition to useful architectural efforts that facilitate in person interactions, we want to put in place the mechanisms to interact without barriers of time or distance. Which means, we need a social media system.
It’s pretty clear that if you align things appropriately (culture, vision, tools), you get better outcomes. And, of course, culture isn’t a line item, and vision’s a leadership mandate. But tools, well, they are a product/service, and need resources.
Which brings us to the initial point: where does this responsibility lie? Despite my desire for it to sit with the folks most likely to understand facilitating learning (though that’s sadly unlikely in too many L&D departments), it could be IT, operations, or, as mentioned above, R&D. The point is, this is arguably one of the most important investments in the organization, and typically not one of the most expensive (making it the best deal going!). Yet there’s no unified, obvious home!
There are worries if it’s IT. They are, or should be, great at maintaining network uptime, but don’t really understand learning. Nor do the other groups, and yet facilitating the discussion in the network is the most important external role. But who funds it?
Let’s be real; no one wants to own the cost when there are other things they’re already doing. But I’d argue that it’s the best investment an L&D organization could make, as it will likely have the biggest impact on the organization. Well, if you really are looking to move needles on key business metrics. So, where do you think it could, and should, reside?