Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

27 April 2016

Moving forward

Clark @ 8:14 am

A few weeks ago, I posted about laying out activities in a space dividing the execution side from the innovation side, and in the head from in the world.  None of you took the bait to talk about what it meant (I’m so disappointed), but I continued to ponder it myself. And at least one idea came to mind.

So what I’m thinking is that the point is not to be using our heads for simple execution. Machines (read: robots or computational agents) are very good at doing what they’re told. Reliably, and repeatably.  They may need oversight, but in many ways we’re seeing this play out.

What we should be doing is trying to automate execution. We aren’t good at doing rote things, and having us do them is silly.  Ideally you automate them, or outsource them in some way.  Let’s save our minds for doing important work.

Of course, increasingly the situations we’re seeing are not matters of simple execution. As things get more ambiguous, more novel, more chaotic, we’re really discovering we need people to handle those situations in innovative ways. So those situations are being moved over to the innovation side regardless.

And, of course, we want that innovation to be fueled by data, with information in the world made available to support making these decisions. Big analytics, or even little analytics, are a good basis, as are models and support tools to facilitate the processes.  And, of course, this doesn’t have to be all in one head, but can draw upon teams, communities, and networks to get to a solution.

The real point is to let machines do what they can do well, and leave to us what we do well. And, what we want to be responsible for.  As I see it, the role of technology is to augment us, not replace us.  It’s up to us to make the choices, but we have the opportunity to work in ways that align with how our brains really think, work, and learn.  I reckon that choice is a no-brainer ;).

6 April 2016

A complex look at task assignments

Clark @ 8:09 am

I was thinking (one morning at 4AM, when I was wishing I was asleep) about designing assignment structures that matched my activity-based learning model.  And a model emerged that I managed to recall when I finally did get up.  I’ve been workshopping it a bit since, tuning some details. No claim that it’s there yet, by the way.

And I’ll be the first to acknowledge that it’s complex, as the diagram represents, but let me tease it apart for you and see if it makes sense. I’m trying to integrate meaningful tasks, meta-learning, and collaboration.  And there are remaining issues, but let’s get to the model first.

So, it starts by assigning the learners a task to create an artefact. (Spelling intended to convey that it’s not a typical artifact, but instead an object created for learning purposes.) It could be a presentation, a video, a document, or what have you.  The learner is also supposed to annotate their rationale for the resulting design.  And, at least initially, there’s a guide to principles for creating an artefact of this type.  There could even be a model presentation.

The instructor then reviews these outputs, and assigns each learner several other artefacts to review.  Here it’s represented as 2 others, but it could be 4. The point is that the group size is the constraining factor.

And, again at least initially, there’s a rubric for evaluating the artefacts to support the learner. There could even be a video of a model evaluation. The learner writes reviews of the two artefacts, and annotates the underlying thinking that accompanies and emerges.  And the instructor reviews the reviews, and provides feedback.

Then, the learner joins with other learners to create a joint output, intended to be better than each individual submission.  Initially, at least, the learners will likely be grouped with others who are similar.  This step might seem counterintuitive, but while ultimately the assignments will be to widely different artefacts, initially the assignment is lighter to allow time to come to grips with the actual process of collaborating (again with a guide, at least initially). Finally, the joint artefacts are evaluated, perhaps even shared with all.
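To make the flow concrete for myself, here’s a minimal sketch of how the core pieces might be represented in code. It’s purely illustrative: the class names, fields, and the round-robin way of handing out peer reviews are my own assumptions, not part of the model.

```python
# A hypothetical sketch of the assignment structure; all names are illustrative, not a spec.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Artefact:
    author: str
    content: str    # the presentation, video, document, or what have you
    rationale: str  # the learner's annotated design thinking

@dataclass
class Review:
    reviewer: str
    comments: str
    reflection: str  # the thinking that accompanies and emerges from reviewing

@dataclass
class AssignmentCycle:
    task: str
    creation_guide: Optional[str]       # faded over successive cycles
    review_rubric: Optional[str]        # faded over successive cycles
    collaboration_guide: Optional[str]  # raised in level, then removed
    artefacts: List[Artefact] = field(default_factory=list)
    peer_reviews: Dict[str, List[Review]] = field(default_factory=dict)
    instructor_feedback: Dict[str, str] = field(default_factory=dict)
    group_artefacts: List[Artefact] = field(default_factory=list)

def assign_peer_reviews(artefacts: List[Artefact], per_learner: int = 2) -> Dict[str, List[Artefact]]:
    """Hand each learner the next few artefacts, round-robin; the group size
    constrains how many reviews each learner gives and receives."""
    n = len(artefacts)
    return {
        a.author: [artefacts[(i + k) % n] for k in range(1, per_learner + 1)]
        for i, a in enumerate(artefacts)
    }
```

The Optional guide and rubric fields are where the fading would live: present early on, and removed as learners develop the skills.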

Several points to make about this.  As indicated, the support is gradually faded. While another task might use another type of artefact (so the guides and rubrics will change), the working-together guide can first move to higher and higher levels (e.g. starting with “everyone contributes to the plan” and ultimately getting to “look to ensure that all are being heard”) and then gradually be removed. And the group assignments go from alike to as widely disparate as possible. And the tasks should eventually come back to the same type of artefact, developing those 21st-century skills around different representations and ways of working.  The model is designed more for a long-term learning experience than for a one-off event (which we should be avoiding anyway).

The artefacts and the notes are evidence the instructor can use to gauge not only the learner’s domain knowledge (and gaps), but also their understanding of 21st-century skills (e.g. the artefact-creation process, and working and researching and…), and their learning-to-learn skills. Moreover, if collaborative tools are used for the co-generation of the final artefact, there are traces of each learner’s contribution to serve as further evidence.

Of course, this could continue. If it’s a complex artefact (such as a product design, not just a presentation), there could be several revisions.  This is just a core structure.  And note that this is not for every assignment. This is a major project, around or in conjunction with which other, smaller things like formative assessment of component skills and presentation of models may occur.

What emerges is that the learners are learning about the meta-cognitive aspects of artefact design, through the guides. They are also meta-learning in their reflections (which may also be scaffolded). And, of course, the overall approach is designed to generate the valuable cognitive processing necessary for learning.

There are some unresolved issues here.  For one, it could be a heavy load on the instructor. It’s essentially impossible to auto-mark the artefacts, though the peer review could remove some of the load, requiring only oversight. For another, it’s hard to fit into a particular time-frame; for instance, this could take more than a week if you give a few days for each section.  Finally, there’s the issue of assessing individual understanding.

I think this represents an integration of a wide spread of desirable features in a learning experience. It’s a model to shoot for, though it’s likely that not all elements will initially be integrated. And, as yet, there’s no LMS that’s going to track the artefact creation across courses and support all aspects of this.  It’s a first draft, and I welcome feedback!

 

30 March 2016

Socially Acceptable

Clark @ 8:07 am

I was talking with my ITA colleagues, and we were discussing the state of awareness of social learning. And we were somewhat concerned that, at least from some evidence, there are misconceptions around about social learning. So I thought I’d take another shot at it.

First, let me make the case for why it’s important. There are a number of reasons to be interested in social learning:

  • it’s more natural: our learning mechanisms were social before they were formal
  • it’s deeper learning: the processing that goes on through knowledge negotiation leads to more flexible and longer-lasting learning
  • it’s about innovation too: with problem-solving, trouble-shooting, research, design, etc, you don’t know the answer before you begin, so it’s learning, and the outcomes are better when done socially

This is only a start, but I reckon if those don’t make the case that you should be taking a serious look at incorporating social business into your organization, you are not really concerned.

Then, let’s clarify what it’s not. Social learning is:

  • not about (just) formal: as suggested above, social extends from formal out to informal to being an essential part of how business gets done.
  • not about social media: social media is a tool to support social learning, but it’s not the focus
  • not a discussion forum available during a course: you need people interacting around artifacts – posts, pages, videos, etc – to generate meaningful outcomes
  • not about getting people together to discuss a problem without proper preparation

So what is good social learning?  Good social learning is driving interaction around work (whether real or designed for learning). Good social learning is:

  • communicating by pointing to relevant new information
  • curating resources, not just for yourself but also for others
  • being transparent about what you’re doing (and why), showing your work
  • discussing different ways of getting something done
  • collaborating to develop a shared response
  • tapping into the power of people
  • developing a shared understanding of how to work and play well together, and using it

At core, it’s really about performing better.  And that should be your focus, no?  So, are you ready to get real about social learning?

23 March 2016

Activity-Based Learning

Clark @ 8:12 am

In a recent conversation with some Up to All of Us colleagues, I was reminded about my ‘reimagining learning’ model. The conversation was about fractals and learning, and how most tools (e.g. the LMS) don’t reflect the conversational nature of learning.  And I was thinking again about how we need to shift our thinking, and how we can reframe it.

I’d pointed one colleague to Diana Laurillard’s model of conversational learning, as it does reflect a more iterative model of learning with an ongoing cycle of action and reflection. And it occurred to me that I hadn’t conveyed what the learner’s experience with the activity curriculum would look like. It’s implicit, but not explicit.

Of course, it’s a series of activities (as opposed to a series of content), but it’s about the product of those activities.  The learner (alone or together) creates a response to a challenge, perhaps accessing relevant content as part of the process, and additionally annotates the thinking behind it.

This is then viewed by peers and/or a mentor, who provide feedback to the learner. As a nuance, there should be guidance for that feedback, so that it explicitly represents the concept(s) that should guide the performance. The subsequent activity could be to revise the product, or move along to something else.

The point being that the learner is engaged in a meaningful assignment (the activity should be contextualized), and actively reflecting. The subsequent activity, as the Laurillard model suggests, should reflect what the learner’s actions have demonstrated.
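If it helps to see that cycle spelled out, here’s a rough sketch in code. The function and field names are my own illustrative assumptions, not a prescribed implementation.

```python
# A hypothetical sketch of the activity cycle: create a response, get concept-guided
# feedback, then revise or move along. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Response:
    product: str     # the learner's response to the challenge
    annotation: str  # the thinking behind it

@dataclass
class Feedback:
    comments: str
    concepts: List[str]  # the concept(s) that should guide the performance
    revise: bool         # revise the product, or move along to something else?

def activity_cycle(challenge: str,
                   create: Callable[[str], Response],
                   review: Callable[[Response], Feedback],
                   max_rounds: int = 3) -> Response:
    """Iterate in the spirit of Laurillard's conversational model:
    act, get feedback framed by the guiding concepts, reflect, act again."""
    response = create(challenge)
    for _ in range(max_rounds):
        feedback = review(response)  # peers and/or a mentor, guided by a rubric
        if not feedback.revise:
            break
        response = create(f"{challenge}\n(Revise in light of: {feedback.comments})")
    return response
```

The `review` step is where the people come in, whether peers, a mentor, or the learner themselves working from a rubric.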

It’s very much the social cognition benefits I’ve talked about before, in creating and then getting feedback on that representation.  The learner’s creating and reflecting, and that provides a rich basis for understanding where they are at.

Again, my purpose here is to help make it clear that a curriculum properly should be about doing, not knowing.  And this is why I believe that there must be people in the loop. While much of that burden might be placed on the other learners (if you have a synchronous cohort model), or even on the learner themselves, with guidance and rubrics for generating their own feedback, you still benefit from oversight in case the understanding gets off track.

We can do a lot to improve asynchronous learning, but we should not neglect social when we can take advantage of it. So, are you wanting to improve your learning?

22 March 2016

Aligning with us

Clark @ 8:05 am

One of the realizations I had in writing the Revolutionize L&D book was how badly we’re out of synch with our brains. I think alignment is a big thing, both from the Coherent Organization perspective of having our flows of information aligned, and in processes that help us move forward in ways that reflect our humanity.

In short, I believe we’re out of alignment with our views on how we think, work, and learn.  The old folklore that represents the thinking that still permeates L&D today is based upon outdated models. And we really have to understand these differences if we’re to get better.

The mistaken belief about thinking is that it’s all done in our head. That is, we keep the knowledge up there, and then when a context comes in we internalize it, make a logical decision, and then act.  And what cognitive science says is that this isn’t really the way it works.  First, our thinking isn’t all in our heads. We distribute it across representational tools like spreadsheets, documents, and (yes) diagrams.  And we don’t make logical decisions without a lot of support or expertise. Instead, we make quick decisions.  This means that we should be looking at tools to support thinking, not just trying to put it all in the head. We should be putting as much in the world as we can, and look to scaffold our processes as well.

There’s also the notion that we go away and come up with the answer on our own, and that individual productivity is what matters.  It turns out that most innovation, problem-solving, etc., gets better results if we do it together.  As I often say, “the room is smarter than the smartest person in the room if you manage the process right.” Yet, we don’t.  And people work better when they understand why what they’re doing is important and they care about it. We should be looking at ways to get people to work together more and better, but instead we still see hierarchical decision making, restrictive cultures, and more.

And, of course, there still persists this model that information dump and knowledge test will lead to new capabilities.  That’s a low probability approach. Whereas if you’re serious about learning, you know it’s mostly about spacing contextualized application of that knowledge to solve problems. Instead, we see rapid elearning tools and templates that tart-up quiz questions.

The point being, we aren’t recognizing what makes us special, and augmenting ourselves in ways that bring out the best.  We’re really running organizations that aren’t designed for humans.  Most of the robotic work should and will get automated, so then we need to find ways to use people for the things they’re best at. That task should fall to the learning folks, and if they’re not ready, well, they’d better figure it out or be left out!  So let’s get a jump on it, shall we?

24 February 2016

When to gamify?

Clark @ 8:10 am

I’ve had lurking in my ‘to do’ list a comment about doing a post on when to gamify. In general, of course, I avoid it, but I have to acknowledge there are times when it makes sense.  And someone challenged me to think about what those circumstances are. So here I’m taking a principled shot at it, but I also welcome your thoughts.

To be clear, let me first define what gamification is to me.  So, I’m a big fan of serious games, that is when you wrap meaningful decisions into contexts that are intrinsically meaningful.  And I can be convinced that there are times when tarting up memory practice with quiz-show window-dressing makes sense, e.g. when it has to be ‘in the head’.  What I typically refer to as gamification, however, is where you use external resources, such as scores, leaderboards, badges, and rewards to support behavior you want to happen.

I happened to hear a gamification expert talk, and he pointed out some rules about what he termed ‘goal science’.  He had five pillars:

  1. that clear goals make people feel connected and align the organization
  2. that working on goals together (in a competitive sense ;) makes them feel supported
  3. that feedback helps people progress in systematic ways
  4. that the tight loop of feedback is more personalized
  5. that choosing challenging goals engages people

Implicit in this is that you do good goal setting and rewards. You have to have some good alignment to get these points across.  He made the point that doing it badly could be worse than not doing it at all!

With these ground rules, we can think about when it might make sense.  I’ll argue that one obvious, and probably sad, case would be when you don’t have a coherent organization, and people aren’t aware of their role in it.  Using gamification to make up for a lack of effective communication isn’t necessarily a good thing, in my mind.

I think it also might make sense for a fun diversion to achieve a short-term goal. This might be particularly useful for an organizational change, when extra motivation could be of assistance in supporting new behaviors. (Say, for moving to a coherent organization. ;) Or some periodic event, supporting say a philanthropic commitment related to the organization.

And it can be a reward for a desired behavior, such as my frequent flier points.  I collect them, hoping to spend them. I resent it, a bit, because it’s never as good as is promised, which is a worry.  Which means it’s not being done well.

On the other hand, I can’t see using it on an ongoing basis, as it seems it would undermine the intrinsic motivation of doing meaningful work.  Making up for a lack of meaningful work would be a bad thing, too.

So, I recall talking to a guy many moons ago who was an expert in motivation for the workplace. And I had the opportunity to see the staggering amount of stuff available to orgs to reward behavior (largely sales) at an exhibit happening next to our event. It’s clear I’m not an expert, but while I’ll stick to my guns about preferring intrinsic motivation, I’m quite willing to believe that there are times it works, including on me.

Ok, those are my thoughts, what’ve I missed?

16 February 2016

Litmos Guest Blog Series

Clark @ 8:09 am

As I did with Learnnovators, with Litmos I’ve also done a series of posts, in this case a year’s worth.  Unlike the other series, which was focused on deeper eLearning design, these aren’t linked thematically; instead they cover a wide range of topics that we mutually agreed were personally interesting and of interest to their audience.

So, we have posts on:

  1. Blending learning
  2. Performance Support
  3. mLearning: Part 1 and Part 2
  4. Advanced Instructional Design
  5. Games and Gamification
  6. Courses in the Ecosystem
  7. L&D and the Bigger Picture
  8. Measurement
  9. Reviewing Design Processes
  10. New Learning Technologies
  11. Collaboration
  12. Meta-Learning

If any of these topics are of interest, I welcome you to check them out.

 

9 February 2016

Social Training?

Clark @ 8:16 am

Sparked by the sight of a post about ‘social training’, I jokingly asked my ITA colleagues whether they could train me to be social.  And, of course, they’ve posted about it.  And it made me think a little bit more too.

Jane talks about being asked “how you make people learn socially”, and mentions that you can’t force people to be social.  That’s the point, you can’t make people engage.  Particularly if it’s not safe to share. She goes on and says it’s got to be “relevant, purposeful and appealing”, and what you do is provide the environment and conditions.

Harold riffs off of Jane’s post, and points out that shifting an organization to a more social way of working takes management’s commitment and work from both above and below.  He lists a number of activities he’s engaged in to try to develop success in several initiatives.  His point being that it’s not just org change, you need to adopt a new mindset about responsibility and work towards an effective culture.

I’ve talked in the past about the environmental elements and the skills required.  There are multiple areas that can be addressed, but it’s not to make people learn socially.  You need the right culture, the technology infrastructure, meaningful work, and the skills.  And these aren’t independent, but intrinsically interlinked.

You likely need to start small, working outward. You need to start with meaningful work, make sure that it’s safe to work together, develop the ability to use social tools to accomplish the work, and develop the skills about working together. Don’t take those for granted!  Then, you can lather-rinse-repeat (don’t get me started on the impact of that last word), spreading both to other work projects and up to community.

You’ll want to be strategic about the choice of tools, and the message. It’s not about the tools (there are replacements for every tool); it’s about the functions they serve.  While you want to use the software already in play, you also don’t want to lock people’s capabilities to one suite of tools in case you want to switch.

And, of course, you need to facilitate the interactions as well. Help people ask for help, offer help, provide feedback, and…

As well, you need to manage the messaging around it.  Help people see the upsides, help support the transition (both with plans to address the expected problems and a team ready to work on any unexpected ones), etc.  It is organizational change, but it’s also culture change.  It takes a plan to scale up.

So, joking aside, it’s not about social training (though learning can be social), but instead about creating a learning organization that brings out the best outcomes from and for the employees. As another discussion posited, you don’t get the best customer experience unless you have a good employee experience.  So, are you creating the best?

5 February 2016

Leverage points for organizational agility

Clark @ 8:19 am

I received some feedback on my post on Organizational Knowledge Mastery.  The claim was that if you trusted to human sensing, you’d only be able to track what’s become common knowledge, and that doesn’t provide the necessary competitive advantage. The extension was that you needed market analytics to see new trends. And it caused me to think a little deeper.

I’m thinking that the individuals in the organization, in their sensing and sharing, are tracking things before they become common knowledge. If people are actively practicing ongoing sensemaking and sharing internally and finding resonance, that can develop understanding before it becomes common knowledge.  They’ve expertise in the area, and so that shared sensemaking should precede what emerges as common knowledge.  Another way to think about it is to ask: where does the knowledge that becomes common knowledge come from?

And I’m thinking that market analytics aren’t going to find the new, because by definition no one knows what to look for yet.  Or at least part of the new.  To put it another way, the qualitative (e.g. semantic) changes aren’t going to be as visible to machine sensing as to human sensing (Watson notwithstanding).  The emerging reality is that human-machine hybrids are more powerful than either alone, but each alone finds different things.  So there were things in protein folding that machines found, but other things that humans playing protein-folding games found.   I have no problem with market data too, but I definitely think that the organization benefits to the extent that it supports human sensemaking as well.  Different insights from different mechanisms.

And I also think a culture for agility comes from a different ‘space’ than does a rabid focus on numerics.  A mindset that accommodates both is needed.  I don’t think they’re incommensurate.  I’m kind of suspicious of dual operating systems versus a podular approach, as I suspect that the hierarchical activities will be automated and/or outsourced, but I’m willing to suspend my criticism until shown otherwise.

So, still pondering this, and welcome your feedback.

2 February 2016

Organizational Knowledge Mastery?

Clark @ 8:05 am

I was pointed to a report from MIT Sloan Management talking about how big data was critical to shorten ‘time to insight’. And I think that’s a ‘good thing’ in the sense that knowing what’s happening faster is clearly going to be part of agility.  But I  must be missing something, because unless I’m mistaken, big data can’t give you the type of insights you really need.

Ok, I get it. By the ‘test and learn’ process of doing experiments and reading reactions, you can gather data quickly. And I’m all for this.  But this is largely internal, and I think the insights needed are external. And yes, the experiments can be outside the firewall, trying new things with customers and visitors and reading reactions, but that’s still in the realms of the understood or expected. How can such a process detect the disruptive influences?

Years ago, friend and colleague Eileen Clegg and I wrote a chapter based upon her biologist husband’s work on extremophiles, looking for insight into how to survive in tough times.  We made analogies from a number of the biological phenomena, and one was the need to be more integrated with the environment, sensing changes and bringing them in. Which, of course, triggered an association.

If we adapt Harold Jarche’s Personal Knowledge Mastery (or PKM), which is about Seek-Sense-Share as a mechanism to grow our own abilities, to organizations, we can see a different model.  Perhaps an OKM?  Here, organizations seek knowledge sources, sense via experiments and reflection, and share internally (and externally, as appropriate ;).

This is partly at the core of the Coherent Organization model as well, where communities are seeking and sharing outside as ways to continue to evolve and feed the teams whose work is driving the organization forward. It’s about flows of information, which can’t happen if you’re in a Miranda Organization. And so while big data is a powerful tool, I think there’s something more required.

I think the practices and the culture of the organization are more important.  If you don’t have those right, big data won’t give big insights, and if you do, big data is just one of your tools.  Even if you’re doing experiments, it might be small data, carefully instrumented experiments targeted at getting specific outcomes, rather than big data, that will give you what you need.  But more importantly, sensing what’s going on outside, having diverse interests and a culture of curiosity is going to be the driver for the unexpected opportunities.

So yes, use the tools to hand and leverage the power of technology, but focus on motivations and culture so that the tools will be used in the important ways.  At least that was my reaction.  What’s yours?
