Reimagining Learning
On the way to the recent Up To All Of Us unconference (#utaou), I hadn’t planned a personal agenda. However, I was going through the diagrams that I’d created on my iPad, and discovered one that I’d frankly forgotten. Which was nice, because it allowed me to review it with fresh eyes, and it resonated. And I decided to put it out at the event to get feedback. Let me talk you through it, because I welcome your feedback too.
Up front, let me state at least part of the motivation. I’m trying to capture a rethinking of education, or formal learning. I’m tired of anything that allows folks to think that knowledge dump and test is going to lead to meaningful change. I’m also trying to ‘think out loud’ for myself, and to start getting more concrete about learning experience design.
Let me start with the second row from the top. I want to start thinking about a learning experience as a series of activities, not a progression of content. These can be a rich suite of things: engagement with a simulation, a group project, a museum visit, an interview, anything you might choose for an individual to engage in to further their learning. And, yes, it can include traditional things: e.g. read this chapter.
This, by the way, has a direct relation to Project Tin Can, a proposal to supersede SCORM, allowing a greater variety of activities: Actor – Verb – Object, or I – did – this. (For all I can recall, the origin of the diagram may have been an attempt to place Tin Can in a broad context!)
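To make the Actor – Verb – Object idea concrete, here’s a minimal sketch of what such a statement might look like, expressed as a Python dictionary; the learner, email address, and activity URI are hypothetical, invented for illustration rather than drawn from any real implementation:

```python
import json

# A minimal sketch of a Tin Can-style "I did this" statement.
# The actor/verb/object structure follows the shape of the proposal;
# the learner, email address, and activity URI are hypothetical.
statement = {
    "actor": {
        "name": "Pat Learner",
        "mbox": "mailto:pat.learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/museum-visit",
        "definition": {"name": {"en-US": "Museum visit with reflection"}},
    },
}

# Serialize for sending to a learning record store.
print(json.dumps(statement, indent=2))
```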
Around these activities, there are a couple of things. For one, content is accessed on the basis of the activities, not the other way around. For another, the activities produce products, as well as reflections.
For the activities to be maximally valuable, they should produce output. A sim session could produce a trace of the learner’s exploration. A group project could provide a documented solution, or a concept-expression video or performance. An interview could produce an audio recording. These products are portfolio items going forward, and assessable items. The assessment could be by self, peer, or mentor.
However, in the spirit of ‘make your thinking visible’ (aka ‘show your work’), there should also be reflections, or cognitive annotations. The underlying thinking needs to be visible for inspection. These are also part of your portfolio, and assessable. Moreover, this is where we have the opportunity to really recognize whether or not the learner is getting the content, and to detect opportunities for assistance.
The learner is driven to content resources (audios, videos, documents, etc.) by meaningful activity. This is in opposition to the notion that a content dump happens before meaningful action. However, prior activities can ensure that learners are prepared to engage in the new activities.
The content could be pre-chosen, or the learners could be scaffolded in choosing appropriate materials; the latter is an opportunity for meta-learning. Similarly, the choice of product could be determined, or left to learner/group choice, again an opportunity for learning cross-project skills. Helping learners create useful reflections is valuable (I recall guiding honours students to take credit for the work they’d done; they were blind to much of their own hard work!).
When I presented this to the groups, there were several questions asked via post-its on the picture I hand-drew. Let me address them here:
What scale are you thinking about?
This unpacks into several issues. What goes into activity design is a whole separate area, and learning experience design may well play a role beneath this level. However, the granularity of the activities is at issue here: I think about this at several scales, from an individual lesson plan to a full curriculum. The choice of evaluation should be competency-based, assessed by rubrics, even jointly designed ones. There is a lot of depth linked to this.
How does this differ from a traditional performance-based learning model?
I hadn’t heard of performance-based learning. Looking it up, there seems to be considerable overlap, also with outcome-based learning, problem-based learning, service learning, and similarly Understanding By Design. It may not be more; I haven’t yet done the side-by-side comparison. It’s scaling these up, and arguably a different lens, and maybe more, or not. Still, I’m trying to carry it to more places, and help provide ways to think anew about instruction and formal education.
An interesting aside, for me, is that this does segue to informal learning. That is, you, as an adult, choose certain activities to continue to develop your ability in certain areas. This framework provides a reference for learners to take control of their own learning, and develop their ability to be better learners. Or so I would think, if done right. Imagine the right side of the diagram moving from mentor to learner control.
How much is algorithmic?
That really depends. Let me answer that in conjunction with this other comment:
Make a convert of this type of process out of a non-tech traditional process and tell that story…
I can’t do that now, but one of the attendees suggested this sounded a lot like what she did in traditional design education. The point is that this framework is independent of technology. You could be assigning studio and classroom and community projects, and getting back write-ups, performances, and more. No digital tech involved.
There are definite ways in which technology can assist: providing tools for content search, and product and reflection generation, but this is not about technology. You could be algorithmic in choosing from a suite of activities by a set of rules governing recommendations based upon learner performance, content available, etc. You could also be algorithmic in programming some feedback around tech-traversal. But that’s definitely not where I’m going right now.
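For the curious, though, here’s a minimal sketch of what the rule-based end of that spectrum might look like; the activity suite, prerequisite rules, and threshold scores are all invented for illustration, not any real recommendation engine:

```python
# A hypothetical rule-based recommender: choose the next activity
# from a suite based on learner performance. All names, rules, and
# thresholds here are invented for illustration.

def recommend_next(activities, performance):
    """Return the first not-yet-completed activity whose prerequisite
    (if any) the learner has passed at the required score."""
    for activity in activities:
        if activity["name"] in performance:
            continue  # already completed
        prereq = activity["prerequisite"]
        if prereq is None or performance.get(prereq, 0.0) >= activity["min_score"]:
            return activity["name"]
    return None  # nothing rule-appropriate; hand off to a mentor

suite = [
    {"name": "intro sim", "prerequisite": None, "min_score": 0.0},
    {"name": "group project", "prerequisite": "intro sim", "min_score": 0.7},
    {"name": "expert interview", "prerequisite": "group project", "min_score": 0.6},
]

print(recommend_next(suite, {"intro sim": 0.8}))  # -> group project
```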
Similarly, I’m going to answer two other questions together:
How can I look at the path others take? and How can I see how I am doing?
The portfolio is really the answer. You should be getting feedback on your products, and seeing others’ feedback (within limits). This is definitely not intended to be solo; hopefully it would happen in a group, or at least some of the activities would (e.g. commenting on blog posts, participating in a discussion forum, etc.). In a tech-mediated environment, you could see others’ (anonymized) paths, access your feedback, and see traces of others’ trajectories.
The real question is: is this formulation useful? Does it give you a new and useful way of thinking about designing learning, and supporting learning?
#LearningStyles Awareness Day review
I want to support David Kelly’s Learning Styles Awareness Day, but have written pretty much all I want to say on the matter. In short, yes, learners differ. And, as a conversation with someone reminded me, it helps for learners to look at how they learn, so as to find ways to optimize their chances for success. Yet:
There’s no psychometrically valid learning styles assessment out there.
There’s no evidence that adapting learning to learning styles is of use.
So what to do?
Use the best learning you can (at the end of the video).
Then help learners accommodate.
Here’re my previous thoughts, developing towards a proposal for how to consider learning styles, in chronological order:
Learning Styles, Brain-Based Learning, and Daniel Willingham
My problem with learning styles really is the people flogging them without a) acknowledging the problems, and b) appropriately limiting the inferences. Sometimes it seems like playing ‘whack-a-mole’…
MOOC reflections
A recent phenomenon is the MOOC, the Massive Open Online Course. I see two major manifestations: the type I have participated in briefly (mea culpa) as run by George Siemens, Stephen Downes, and co-conspirators, and the type being run by places like Stanford. Both share large numbers of students and laudable goals. Each also has flaws, in my mind, which illustrate some issues about education.
The Stanford model, as I understand it (and I haven’t taken one), features a rigorous curriculum of content and assessments, in technical fields like AI and programming. The goal is to ensure a high quality learning experience for anyone with sufficient technical ability and access to the Internet. Currently there is a discussion board, but otherwise the experience is, effectively, solo.
The connectivist MOOCs, on the other hand, are highly social. The learning comes from content presented by a lecturer, and then dialog via social media, where the contributions of the participants are shared. Assessment comes from participation and reflection, without explicit contextualized practice.
The downside of the latter is just that: with little direction, the courses really require effective self-learners. These courses assume that through the process, learners will develop learning skills, and the philosophical underpinning is that learning is about making the connections oneself. As was pointed out by Lisa Chamberlin and Tracy Parish in an article, this can be problematic. As of yet, I don’t think effective self-learning skills are a safe assumption (and we do need to remedy that).
The problem with the former is that learners are largely dependent on the instructors, and will end up with only the instructors’ understanding: they aren’t seeing how other learners conceptualize the information, and consequently aren’t developing a richer understanding. You have to have really high quality materials, and highly targeted assessments. The success will live and die on the quality of the assessments, until the social aspect is engaged.
I was recently chided that the learning theories I subscribe to are somewhat dated, and guilty as charged; my grounding has taken a small hit from my not being solidly in the academic community of late. On the other hand, I have yet to see a theory that is as usefully integrative of cognitive and social learning theory as Cognitive Apprenticeship (and I’m willing to be wrong), so I will continue to use (my somewhat adulterated version of) it until I am otherwise informed.
From the Cognitive Apprenticeship perspective, learners need motivating and meaningful tasks around which to organize their collective learning. I reckon more social interaction will be wrapped around the Stanford environment, and that the connectivist MOOCs will scaffold learners in taking on the responsibility to make the experience meaningful, if the formal version (which I’ve not experienced) doesn’t already.
The upshot is that these are valuable initiatives from both pragmatic and principled perspectives, deepening our understanding while broadening educational reach. I look forward to seeing further developments.
At the Edge of India
A few months back, courtesy of my colleague Jay Cross, I got into discussions about the EdgeX conference, scheduled for March 12-14 in New Delhi. Titled the “Disruptive Educational Research Conference”, it certainly has intriguing aspects.
I was asked to talk about games, the topic of my first book. Owing to unfortunate circumstances (my friend and co-speaker on games had to change plans), it looks like I’ll also be talking about mobile (books two and three) which is exciting despite the circumstances.
However, what’s really exciting is the lineup of other speakers. I’ve been a fan of George Siemens and Stephen Downes for years, and an eager but less focused follower of Dave Cormier and Alec Couros. I’ve only met Stephen once, and am eager to meet the rest. I don’t know the remaining speakers, but their positions and descriptions suggest that this is going to be a great event. Meeting new and interesting people is one of the reasons to go to a conference in the first place! And, of course, Jay will be there too.
I’ve been to India before, as one of my partners has their origins there, and it’s a fascinating place. Part of the conference is to look at how the latest concepts of learning play out in the Indian context, but given that it spans K12, higher ed, and corporate, we’ll be talking principles that cut across contexts.
Looking at disruptive concepts, with top thinkers, in an intriguing context, makes this an exciting opportunity, I reckon. I realize attending may not make sense for many readers, but I’m hoping some will be intrigued enough to check it out, and there will be a steady stream of related materials. Already there are links from many speakers, and resources about the Indian education context. If you do go, please say hi!
Meta-mobile
As a followup to my last post, I was thinking about how you would use the different modes of mobile (the Four C’s: Content, Compute, Communicate, & Capture) to support the different layers of learning.
Here I’ve made a first attempt at trying to matrix the 3 layers of learning (performance, learning, meta-learning) by the 4 C’s of mobile. It’s indicative, not exhaustive, but it helps me to try to get concrete about what you might do.
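In case the diagram itself doesn’t come through here, here’s an indicative sketch of the matrix as a simple data structure; the layer and mode names are from these posts, but the cell entries are illustrative possibilities rather than a transcription of the diagram:

```python
# An indicative sketch of the layers-by-4C's matrix. The cell entries
# are illustrative possibilities, not a transcription of the diagram.
layers = ["performance", "learning", "meta-learning"]
modes = ["content", "compute", "communicate", "capture"]

matrix = {
    ("performance", "content"): "look up a job aid in the moment",
    ("performance", "compute"): "use a calculator or decision-support tool",
    ("performance", "communicate"): "call an expert for help",
    ("performance", "capture"): "photograph the situation for reference",
    ("learning", "content"): "watch a short video introducing a concept",
    ("learning", "compute"): "work through an interactive quiz or sim",
    ("learning", "communicate"): "discuss the concept with peers",
    ("learning", "capture"): "record a practice performance for feedback",
    ("meta-learning", "content"): "read tips on how to learn from resources",
    ("meta-learning", "compute"): "review analytics on one's own learning",
    ("meta-learning", "communicate"): "dialog with a learning mentor",
    ("meta-learning", "capture"): "journal reflections on how one learned",
}

for layer in layers:
    print(f"{layer}:")
    for mode in modes:
        print(f"  {mode}: {matrix[(layer, mode)]}")
```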
As you can see, there’s some overlap, and one question is whether there are continuums between the layers. Is performance support categorically different than formal learning, or are there bridges? Is meta-learning categorically different? (I’m not sure I care too much, as long as I’m considering all of them!)
So, in the interest of learning and thinking ‘out loud’, I invite your feedback.
Layers of learning
As I think about slow learning and Sage at the Side, I want to consider a continuum of tech-enablement, one that includes performance support, formal learning, and meta-learning. One way to think about it is layering on support across the learning event.
As I talked about in Making Slow Learning Concrete, the idea is to have little bits of information layered on top of what you’re doing. Thus, the first level might be to have performance support, to optimize the outcome of the event.
However, a second layer, potentially wrapped before and after the event, would connect the essence of the performance to a learning framework. Perhaps not all events would have it, but it would tie the event’s context and goals to that framework. It could be a conceptual model, and certainly could include feedback.
A third layer would be a meta-learning layer. Looking at any resources used (perhaps a different one this time than the last), some information could be provided that helped the learner understand their own learning. It could be reflection support, a map of the learner’s actions, even connecting to a learning mentor; whatever would help them look at how they learned, with the purpose of improving their own learning.
With this approach, we start de-coupling learning from a particular event, and start wrapping learning around our lives. I’ve used the label ‘slow learning’, but I really believe that while this will feel slower, it will actually accelerate learners to competence faster than the ineffective methods we currently use. It will take lots of tuning to make an experience that feels natural and supportive, as opposed to intrusive, and there are some real system architecture issues, but I think this is doable, and certainly worth exploring.
Sharing Failure
I’ve earlier talked about the importance of failure in learning, and now it’s revealed that Apple’s leadership development program plays that up in a big way. There are risks in sharing, and rewards. And ways to do it better and worse.
In an article on MacRumors (obviously, an Apple info site), they detail part of Adam Lashinsky’s new Inside Apple book that reports on Apple’s executive development program. Steve Jobs hired a couple of biz school heavyweights to develop the program, and apparently “Wherever possible the cases shine a light on mishaps…”. They use examples from other companies and, importantly, Apple’s own missteps.
Companies that can’t learn from mistakes, their own and others’, are doomed to repeat them. In organizations where it’s not safe to share failures, where anything you say can and will be held against you, the same mistakes will keep getting made. I’ve worked with firms that have very smart people, but their culture is so aggressive that they can’t admit errors. As a consequence, the company continues to make them, and gets in its own way. You don’t want to celebrate failure, but you do want to tolerate it. What can you do?
I’ve heard a great solution. Many years ago now, at the event that led to Conner and Clawson’s Creating a Learning Culture, one small company shared their approach: they ring a bell not when the mistake is made, but when the lesson’s learned. They’re celebrating – and, importantly, sharing – the learning from the event. This is a beautiful idea, and a powerful opportunity to use social media when the message goes beyond a proximal group.
There’s a lot that goes on behind this, particularly in terms of having a culture where it’s safe to make mistakes. Culture eats strategy for breakfast, as the saying goes. What is a problem is making the same mistake again, or dumb mistakes. How do you prevent the latter? By sharing your thinking, or thinking out loud, as you develop your planned steps.
Now, just getting people sharing isn’t necessarily sufficient. Just yesterday (as I write), Jane Bozarth pointed me towards an article in the New Yorker (at least the abstract thereof) that argues that brainstorming doesn’t work. I’ve said many times that the old adage “the room is smarter than the smartest person in the room” needs a caveat: if you manage the process right. There are empirical results that guide what works from what doesn’t, such as: have everyone think on their own first, then share; focus initially on divergence before convergence; make a culture where it’s safe, even encouraged, to have a diversity of viewpoints; etc.
No one says getting a community collaborating is easy, but like anything else, there are ways to do it, and do it right. And here too, you can learn from the mistakes of others…
Performance Architecture
I’ve been using the tag ‘learning experience design strategy’ as a way to think about not taking the same old approach of events über alles. The fact of the matter is that we’ve got quite a lot of models and resources to draw upon, and we need to rethink what we’re doing.
The problem is that it goes far beyond just a more enlightened instructional design, which of course we need. We need to think of content architectures, blends between formal and informal, contextual awareness, cross-platform delivery, and more. It involves technology systems, design processes, organizational change, and more. We also need to focus on the bigger picture.
Yet the vision driving this is, to me, truly inspiring: augmenting our performance in the moment and developing us over time in a seamless way, not in an idiosyncratic and unaligned way. And it is strategic, but I’m wondering if architecture doesn’t better capture the need for systems and processes as well as revised design.
This got triggered by an exercise I’m engaging in, thinking about how to convey this. It’s something along the lines of:
The curriculum’s wrong:
- it’s not knowledge objectives, it’s skills
- it’s not current needs, it’s adapting to change
- it’s not about being smart, it’s about being wise
The pedagogy’s wrong:
- it’s not a flood, but a drip
- it’s not knowledge dump, it’s decision-making
- it’s not expert-mandated, it’s learner-engaging
- it’s not ‘away from work’, it’s in context
The performance model is wrong:
- it’s not all in the head, it’s distributed across tools and systems
- it’s not all facts and skill, it’s motivation and confidence
- it’s not independent, it’s socially developed
- it’s not about doing things right, it’s about doing the right thing
The evaluation is wrong:
- it’s not seat time, it’s business outcomes
- it’s not efficiency, at least until it’s effective
- it’s not about norm-referencing, it’s about criterion-referencing
So what does this look like in practice? I think it’s about a support system organized so that it recognizes what you’re trying to do, and provides possible help. On top of that, it’s about showing where the advice comes from, developing understanding as an additional light layer. Finally, it’s about making performance visible and looking at performance across the previous levels, facilitating learning to learn. And the underlying values are also made clear.
You don’t have to do all of that right away. It can start with just better formal learning design, and a bit of content granularity. It certainly starts with social media involvement, and adapting the organizational culture to start developing meta-learning. But you want to have a vision of where you’re going.
And what does it take to get here? It needs a new design process that starts from the performance gap and looks at root causes. The design process then considers what sort of experience would both achieve the end goal and fill the gaps in the performer equation (including both technology aids and knowledge and skill upgrades), and considers how that develops over time, recognizing the capabilities of both humans and technology, with a value set that emphasizes letting humans do the interesting work. It’ll also take models of content, users, context, and goals, with a content architecture and a flexible delivery model, plus rich pictures of what a learning experience might look like and what learning resources could be. And it takes an implementation process that is agile, iterative, and reflective, with contextualized evaluation. At least, that sounds right to me.
Now, what sounds right to you: learning experience design strategy, performance system design, performance architecture, <your choice here>?
Failing to Learn
My colleague Harold Jarche pointed me to a post by Dave Snowden about deliberate practice, which I found interesting for a facet that isn’t central to the article (which makes worthwhile points). Among a list of important requirements for the meaningful activity that is part of effective learning (i.e. it’s not just 10K hours of practice that makes an expert, but what sort of practice has an effect), Dave cites that “at least half of … experiments should fail”. Think about that for a minute.
What that’s saying is that at least half of the money you invest in new things could be conceived of as wasted. You might be considered a very ineffective manager if 50% of your investments don’t yield returns! Now, first of all, I’m sure you recognize that failed experiments aren’t a complete waste, as long as you learn something (“when you lose, don’t lose the lesson”, as the saying goes). Still, 50% might seem like a high failure rate. But is low risk really good?
I remember hearing a talk by a Canadian AI researcher (whose name escapes me after all these years) who had studied the optimal ratio of successes to failures in helping a system learn. Now this was particular to the learning algorithm he’d chosen, but his result was roughly that you learned fastest if you failed two-thirds of the time, or around 67% failure. Now that’d be pretty disheartening, but if you could take emotion out of the equation, e.g. made it safe to fail, would learning faster be a big enough argument to support bigger failure?
It depends on a lot: on how well you discern the lessons from failure, how well you tolerate failure, how much social scrutiny there is and how tolerant that public viewpoint is. But it’s interesting to contemplate what might be an optimal context for failure and, given that, what would be the fastest way to learn and capitalize on that learning. You want your experiments to be designed in the first place to yield maximum information, but if they do, what would a valuable success rate be?
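I can’t reconstruct that researcher’s algorithm, but as one toy lens on the question: if you treat each attempt as a simple pass/fail outcome, the information carried per trial (Shannon entropy) peaks at a 50% failure rate, so his 67% figure presumably reflects the particulars of his algorithm. A minimal sketch, under that simplifying assumption:

```python
import math

def bits_per_trial(failure_rate):
    """Shannon entropy of a pass/fail outcome: a toy proxy for how much
    each attempt tells you. This is not the cited researcher's model,
    just one simple framing of failure rate versus information gained."""
    p = failure_rate
    if p <= 0.0 or p >= 1.0:
        return 0.0  # guaranteed outcomes carry no new information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for rate in (0.10, 0.33, 0.50, 0.67, 0.90):
    print(f"failure rate {rate:.2f}: {bits_per_trial(rate):.3f} bits per trial")
```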
I do believe that they who adapt fastest will be the survivors. That adaptation may be subconscious, but I think conscious reflection is a valuable component. Certainly for sharing the learning, so no one else has to make the same mistakes. So are you learning just as fast as you can?