Maxwell Planck gave the afternoon keynote for the opening day of DevLearn. He talked about the trajectory of VR, with very interesting reflections on creativity, story, and meaning.
Penn Jillette #DevLearn Keynote Mindmap
Collaboration, Communication, and Cooperation
In thinking about the Coherent Organization, the original proposal from my colleague Harold Jarche was that there were two key attitudes: collaboration and cooperation. And I find myself talking about collaboration and communication. It’s time to try to reconcile those, and propose why I think collaboration is a new business watchword.
So, Harold argues that there are two key ways of working, collaborating and cooperating. To him, collaboration is when you’re committed to achieving a goal, whether voluntarily or involuntarily. Cooperation, for him, is when you have the willingness to contribute on an ongoing basis: putting out your own work, commenting on others, and answering questions. And he suggests that cooperation is the more important, as it’s more voluntary. And I agree that it’s likely that needs will drive collaboration, and cooperation comes from within (and in a safe environment). I think he’s talking about personal commitment, and rightly so.
So why do I talk about communication and collaboration? Because the vehicle for cooperation is communication, and so we not only need the impetus to contribute, but the skills. He’s talking about creating an effective network, and I’m talking about getting the job done. He’s nurturing a culture, and I’m about developing practices. Which are both needed and mutually reinforcing, and so I think we’re agreeing furiously.
And as I write this, my own thinking is changing. I do believe collaboration is what’s going to get things done for organizations in the short term, but I think there are two notions of collaboration. One is the traditional form of a team working on a project. However, there’s another approach that takes the longer term view. Here, it’s about people keeping a casual eye on what’s going on and serendipitous sparks fan flames. That does require cooperation, of course.
I’ve recently been reading Steven Johnson’s Where Good Ideas Come From, and Keith Sawyer’s Group Genius (both recommended), and it’s clear that true innovation is about getting people to work together over time. Real innovation percolates, suffers mistakes, and can’t be forced or planned. While I think progress can be made by teams working on specific needs, the change in my thinking is realizing that the longer term process of real innovation requires continual contribution in networks. What Sawyer terms ‘collaboration webs’. And this will require cooperation.
As an aside, there are still big opportunities for collaboration tools. On a recent #lrnchat, a colleague shared how she was collaborating on presentations using Google Slides. And I’ve done much important work with others using Google Docs and Sheets. And tools exist for diagramming, and whiteboarding, and more. Still, the tools feel embryonic. I want voice and text live as well as comments. I want to have flexible representations mixed in, so I can be working on numbers and diagrams and text in one doc (a brief eulogy here for the fabulous program Trapeze that had a revolutionary document model decades ago).
While collaboration may get the immediate focus and the ink inches (I guess pixels these days :) – because of new tools, and the immediate business benefits – I think the longer term need will be to create an environment where the culture, the practices, and the tools are aligned for successful learning. I think there’re reasons to focus on both, but the important thing is to recognize the differences and get both right. Amy Edmondson, in her book on organizational agility, Teaming, suggested using the term ‘learning’ instead of ‘innovation’, as it focuses on the longer term and feels safer. So perhaps I’ll talk about organizational learning for the long term, and use collaboration for the short term work. What do you think?
Extending engagements
In a couple of recent posts, I’ve been telling tales of helping organizations, and I wanted to tell at least one more. In this case, I’m extending the type of work I’ve done to have a real impact, still with a low overhead. The key is to include some followup activities.
Serious eLearning
In one instance, a person who’d attended my game design workshop wanted to put it into practice. With a colleague, they wanted to improve their online learning to better support their stakeholders, and to deepen the experience. The goal was to provide their learners with opportunities to practice success skills.
We knew they were going to be developing scenarios, so the key was to develop the skills of these two. Consequently, we arranged a series of meetings where they’d deliver their latest work, and I’d not only critique it, but use it as an opportunity to deepen their understanding. This occurred over a couple of months, on calls of an hour or so. Each call would occur a short time after they delivered their latest version.
It took several iterations, but their outputs improved substantially. When we were comfortable with their progress, the engagement was over.
Learning Strategy
In another instance, a company was moving to a ‘customer experience’ focus, and wanted to workshop what this meant for the training function. They had already planned on using a particular process that involves a team of stakeholders on a week-long meeting, and in particular that process called for one outsider (yours truly). Beforehand, I got up to speed on their business and current status.
During that week, I found my role was to keep advocating for a bigger picture of meeting customer learning and performance needs. They found it easy to slip back into thinking of courses, but continued to ‘get’ that they should look at augmenting their work with performance support. Given that their product was complex, it became clear that ‘how to’ videos were a real opportunity. They were particularly excited about the concept of ‘spacing’ practice, and loved the spacing diagram originated by my colleague Will Thalheimer.
What’s more important is that we also built in several ongoing reviews. So, their process had a few subsequent deliverables, and we worked out that they would come through me for feedback. In general, new ideas can backslide if not reinforced, and this process helped them cement in several new features, including a new emphasis on the videos.
The point being, extending engagements with a few simple followups provides a much higher likelihood of improvement than just a one-off. It doesn’t take much, and the outcome is better. It is a spaced practice, really, and we know that works better. I reckon the marginal extra investment yields a much bigger benefit. Does that make sense to you?
Demoing Out Loud (#wolweek and #DevLearn)
Demoing is a form of working out loud, right? So I recently was involved in a project with Learnnovators where we designed some demo elearning (on the workplace of the future), and documented the thinking behind it. (The posts, published by Learning Solutions, are aggregated here.) And now there’ll be a chance to see it! So, a couple of things to note.
First, this is Work Out Loud Week, and you should be seeing considerable attention to working out loud (aka Show Your Work). On principle, this is a good practice (and part of the Workplace of the Future, to be recursive). I strongly recommend you keep an eye out for events and posts that emerge. There’s an official site for Work Out Loud Week: Wolweek.com, a Twitter account: @Wolweek, and the hashtag #wolweek, so there are lots of ways to see what’s up. There are many benefits that accrue, not least because you need to create a culture where this practice can live long and prosper. Once it does, you see more awareness of activity, improved outcomes, and more.
Second, if you’ll be at DevLearn next week, I’ll be demoing the resulting course at the DemoFest (table 84). Come by and share your thoughts and/or find out what the goal was, the tradeoffs faced, and the resulting decisions made. Of course, I encourage you to attend my workshop on elearning strategy and mythbusting session as well. I’ll also be haunting the xAPI camp on the Tuesday. Hope to see you there!
Strategy Sessions
In a previous post, I talked about a couple of instances where I worked with folks to let them ‘pick my brain’. Those were about learning design in particular, but I’ve also worked with folks on strategy. In these strategy sessions, things work a little differently.
So, in a typical strategy session, I prepare by looking at what they’ve done beforehand: any existing strategy documents. I also look at their current context, e.g. their business, market, customers, and products/services. Finally, I look at their stated goals. I also explore their stated needs, and see if there are some they may be missing. Then we get together.
I typically will spend an hour or so going over some principles, so we have a shared framework to discuss against. Then we brainstorm possible actions. We’ve prepped for this, circulating the topics beforehand, so people have had time to identify their individual ideas. We get them documented, diverging before converging. This may be a relatively large group, with representative stakeholders, but not so large that it can’t be managed.
Then, typically, a smaller group will take those ideas and prioritize them. To be clear, it’s informed by the context and infrastructure, so that the steps don’t just go from easier to harder, but it’s also about choosing steps that are strategic in securing credibility, building capacity, and leveraging other initiatives. At the end, however, the team I’m working with has both a general roadmap and a specific plan.
And I think this is good. They’ve gotten some new and valuable ways to think about strategy, and custom advice, all in a very short engagement. Sometimes it’s happened under the rubric of a mobile strategy, sometimes it’s more general, but it always opens eyes. In two particular instances, I recall that the outcomes they ended up focusing on most weren’t even on their radar when they started!
This is another instance of how folks can get high benefit from a small engagement. Picking my brain can be valuable, but it’s not a fair engagement unless we make it mutually rewarding. That’s not so hard to do, however. Just so you know.
Measuring Culture Change
Someone recently asked how you would go about measuring culture change, and I thought it was an interesting question. I’ll think ‘out loud’ about the possibilities. A learning culture is optimal for organizational innovation and agility, and it’s likely that not all elements are already in place. So it’s plausible that you’d want to change, and if you do, you’d like to know how it’s going.
I think there are two major categories of measures: direct and indirect. Direct measures are ones that are impacting the outcomes you’re looking for, and indirect ones are steps along the way. Say, for instance, one desirable outcome of a learning culture would be, well, learning! In this case, I mean the broad sense of learning: problems solved, new designs generated, research answering questions. And indirect would be activity likely to yield that outcome. It could be engagement, or social interaction, or… If we think of it in a Kirkpatrickian sense, we want to generate the indirect activity, and then measure the actual business impact.
What direct measures might there be? I can see time to solve customer problems or problems solved per time. And/or I might look at the rate of research questions answered. Or the rate of new product generation. Of course, if you were expecting other outcomes from your culture initiative, you’d naturally want aligned methods. You could just be concerned with employee engagement, but I’m somewhat inclined (and willing to be wrong) to think about what the outcome of increased engagement would be. It could also be retention or recruitment, if those are your goals.
These latter – engagement, recruitment, retention – are also possible indirect measures. They indicate that things are better. Another indirect but more targeted measure might be the amount of collaboration happening (e.g. the use of collaboration tools) or even activity in social networks. Those have been touted as the benefits of building community in social media, and those are worthwhile as well.
As a process, I think about what I might do before, during, and after any culture change initiative. I’d probably want a baseline to begin with, and then regular (if not continual) assessments as we go. I’d take small steps, perhaps in one unit to begin, and monitor the impact, tuning as I go along. Culture change is a journey, not an event, after all ;).
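To make that before/during/after idea concrete, here’s a minimal sketch (in Python, with made-up metric names and numbers purely for illustration, not any real measurement framework) of comparing regular assessments against a baseline snapshot:

```python
# Hypothetical culture-change measures: a baseline taken before the
# initiative, then regular snapshots compared against it.
baseline = {"problems_solved_per_month": 12, "collab_tool_posts": 340}

def change_vs_baseline(current, baseline):
    """Percent change of each measure relative to the baseline snapshot."""
    return {k: round(100 * (current[k] - baseline[k]) / baseline[k], 1)
            for k in baseline}

# A later regular assessment, e.g. three months into a pilot in one unit:
month_3 = {"problems_solved_per_month": 15, "collab_tool_posts": 510}
print(change_vs_baseline(month_3, baseline))
```

Running something like this regularly (rather than once at the end) is what supports the tuning-as-you-go approach: small steps in one unit, monitored against the baseline.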
So ok, that’s off the top of my head, what say you?
Pick my brain?
It’s a continual bane of a consultant’s existence that there are people who want to ‘pick your brain’. It’s really asking for free consulting, and as such, it’s insulting. If you google the phrase, you’ll see how many people have indicated their problems with this! However, there are quite legitimate ways to pick my brain, and I thought I’d mention a couple. In both cases, I think they were great engagements on both sides, high value for a reasonable investment.
Both in this case were with folks who develop content: one a not-for-profit, the other in the higher-ed space. One had heard me speak about learning design, and one had heard about a workshop I’d given, but both contacted me. Clearly they realized that there’s value to them in having a scrutable learning design.
Content Review
So for the first one, they wanted some feedback on their design, and we arranged that I’d investigate a representative sample and provide feedback. I went through systematically, taking notes, and compiled my observations into a report I sent them. This didn’t take any investment in travel, but of course this feedback only points out what’s wrong, and doesn’t really provide mechanisms to improve.
I think they were surprised at the outcome, as the feedback was fairly robust. They had a good design, largely, under the constraints, but there were some systematic design problems. There were also some places where they’d managed to have some errors that had passed editorial (and this was only a small sample of a replicated model across a broad curriculum). To be fair, some of my complaints came from situations that were appropriate given some aspect of their context that I hadn’t known, but there were still a set of specific improvements I could recommend:
“We found his comments insightful, and we look forward to implementing his expert suggestions to further improve our product…”
Learning Design Workshop
In this case, they’d heard about a workshop that I’d run on behalf of a client, and were interested in getting a similar experience. They had been designing content and had a great ability to track the results of their design and tweak, but really wanted a grounding in the underlying learning science. I did review some sample content, but I also traveled to their site for a day and presented learning science details and workshopped the implications to their design process.
I went through details such as:
- the importance and format for objectives,
- SME limitations and tips on how to work with them,
- what makes effective practice,
- the role and characteristics of concepts,
- the details behind examples,
- introduction and the role of emotions in the learning experience,
- and more.
We went through examples of their content, and workshopped how they could adjust their design processes in pragmatic ways to instill the important details into their approach. We also talked about ways to follow up so as not to lose the momentum, but it was clear that this first visit was viewed favorably:
“…a walking encyclopedia of learning science… was able to respond to our inquiries with one well-researched perspective after another”.
So, there are ways to pick my brain that provide high value with mutual benefit on each side. Sure, you can read my blog or books, but sometimes you may want assistance in contextualizing it to your situation. I encourage you to think of making an investment in quality. These are about learning design, but I have some examples in strategy that I intend to share soon. And more. Stay tuned for more ‘adventures in consulting’ tales that talk about ways in which a variety of needs are met. Maybe one will resonate with you. Of course, they’ll be mixed in with the regular reflections you’ve come to expect.
Reconciling Activity and Decisions
In preparing to work with a client on developing their learning science understanding, I realized that I was using two representations about meaningful learner interaction that could be seen to be conflicting. On the one hand I talk about using decisions as a basis for design, and on the other I refer to activity-based learning. And I have separate diagrams for each. What was necessary was reconciling activity and decisions.
So first, I talk about how we should be putting learners in the place to make decisions like the ones they’ll need to make after the learning experience. We need to put them in a context, and then a particular event triggers the need for a decision. And then there are options for actions to take. From the design point of view, there are correct answers and wrong answers. These wrong answers, of course, should reflect where learners go wrong, capturing reliable misconceptions. People mostly don’t make errors randomly; instead, their errors reflect inappropriate models being brought to bear. And after their choices, there are consequences. I like those consequences to be represented first, before the external feedback comes in. This is just a better multiple choice question (or other canned interaction), but…
If the consequences of the initial decision lead to a new situation and new decisions, now we’re talking a full scenario (whether implemented via branching or a full simulation-driven experience). Note that this is also the structure of a game. In fact, this emerged from game designer Sid Meier’s quote about how games are a series of interesting decisions. Hence, serious games are a series of interesting and important decisions! And, of course, this is programmed in advance (if we’re not talking about online role playing), so learners get feedback without requiring human intervention (though there’re powerful benefits to collecting discussion around the learning challenge).
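As a sketch of that structure (in Python, with names and story content entirely of my own invention, not any particular authoring tool’s format): each decision places the learner in a context, a trigger forces a choice among options, each option carries a story consequence shown before the external feedback, and an option can chain to a next decision to form a branching scenario.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Option:
    label: str                       # the action the learner can take
    consequence: str                 # what happens in the story, shown first
    feedback: str                    # external instructional feedback, shown after
    next_node: Optional[str] = None  # id of a follow-on decision, if any

@dataclass
class Decision:
    node_id: str
    context: str                     # the situation the learner is placed in
    trigger: str                     # the event that forces a decision
    options: list = field(default_factory=list)

# One Decision alone is the "better multiple-choice question"; chaining
# decisions via next_node yields a full branching scenario.
intro = Decision(
    node_id="intro",
    context="You are on-site with an unhappy client.",
    trigger="The client asks why the rollout slipped.",
    options=[
        Option("Blame the vendor",
               "The client grows more frustrated.",
               "Deflecting reflects a common misconception; own the issue first.",
               next_node="escalate"),
        Option("Acknowledge it and propose a recovery plan",
               "The client relaxes and engages with the plan.",
               "Owning the problem and offering next steps rebuilds trust.",
               next_node="plan"),
    ],
)
```

Note how the wrong option isn’t random: it encodes a plausible misconception, and its consequence plays out in the story before any instructional feedback appears.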
However, I also have characterized learning as a series of activities, and those activities generate some work product and are (ideally) annotated with reflections. These products can (and arguably should) be cast as a response to some storyline that puts learners in a role related to the ones they’re likely to be in after the learning experience (even with some exaggeration). These are complex outputs that are unlikely to be auto-marked, and can be the basis of either or both of peer or mentor review.
The benefits here are that when we make the work product reflect real practice, we’re developing a suite of outcomes beyond just the content. We can require different formats – presentations, spreadsheets, documents – developing modeling and communication skills. We can require group work, developing interpersonal skills. And we’re developing time management and project management skills as well. The tradeoff is the amount of mentoring time.
The challenge, then, is to identify the differences, and then think about when you’d use each. The obvious difference is the simpler structure for decisions. While a branching scenario or programmed simulation/game is more than one decision, it’s still more linear than creating a product. Developing a product is typically a series of many decisions! Hence the difficulty for auto-marking, but also the power for learning. It depends on the learning outcome you need, of course. Now, too many activities in a short period of time could tax instructor time, so the best answer (as in many things) is to have a blend.
That’s my reconciliation of activity and decisions. Does it make sense to you? What did I miss?
Self-regulation & PKM
I’m a fan of Harold Jarche’s Seek-Sense-Share (SSS) model for Personal Knowledge Mastery (PKM). I was also reading about self-regulated learning, and a proposed model for that. And I realized they could be related. Naturally, I created a diagram.
To start with, Harold’s model is oriented around coping with the information flow as a component of learning. He starts with seek, which could be either from a pre-arranged feed or the result of a specific search. Then, the information is processed, by either or both of representation or active experimentation. Finally, information is shared, either broadcast through some form of post, or sent to a specific target. Note that the interpretations within the SSS boxes, e.g. feed and post, are mine, as I haven’t checked them with him.
Now, the model of self-regulated learning I was reading about talks about personal goals, learning actions, and evaluation. It seems to me that learning goals sit outside of SSS, the SSS serves as the actions, and then evaluation comes after the action. Specifically, the goals inform the choice of feeds and any search, as well as the context for interpretation. Similarly, the results of personal sensing and the feedback from sharing inform the evaluation. And of course, the evaluation feeds new goals.
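As an illustration only (the function bodies and names below are my own placeholder labels, not Harold’s model or the self-regulation literature), the wrapped loop might be sketched like this:

```python
# A toy rendering of self-regulated learning wrapping Seek-Sense-Share:
# goals drive seeking, sensing processes what's found, sharing yields
# feedback, and evaluation revises the goals for the next pass.
def seek(goals):
    # Seek: pull items from feeds/searches chosen to match current goals
    return [f"info about {g}" for g in goals]

def sense(items):
    # Sense: process the information via representation and/or experimentation
    return [f"interpretation of {i}" for i in items]

def share(interpretations):
    # Share: post broadly or send to specific targets; returns feedback received
    return [f"feedback on {x}" for x in interpretations]

def evaluate(goals, results, feedback):
    # Evaluation: compare outcomes against goals and set revised goals
    return goals + ["revised goal"] if feedback else goals

goals = ["understand mobile strategy"]
for _ in range(2):  # the outer cycle: goals -> SSS -> evaluation -> new goals
    items = seek(goals)
    results = sense(items)
    feedback = share(results)
    goals = evaluate(goals, results, feedback)
```

The point of the sketch is just the shape: goals sit outside SSS, SSS serves as the actions, and evaluation closes the loop back into new goals.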
Two additional things. First, the encompassing notion is that this is under continual review. That is, you’re taking time to think about how you set goals, act (SSS), and evaluate. Also, let me note that I think this makes sense at both the individual and organizational level. That is, organizations need to be explicit about their knowledge, experiments, and learnings.
The outside loop is likely to be an implicit part of PKM as well, but as indicated I haven’t had a chance to discuss it with Harold. However, it’s useful for me to represent it this way as an experiment (see what I did there?). The question is, does this make sense for you?