In response to my post where I offered to ‘listen’, I’ve received several comments suggesting topics, so I thought I should respond. One asked about meta-learning (learning to learn), particularly in the situation of courses serving a variety of expertise levels, and touching on issues of learner responsibility. The commenter pointed to a presentation on learning to learn that had a nice framework, and I thought I should elaborate.
The framework mentioned described three stages of expertise: apprentice, journeyman (using the traditional term; is there a move to ‘journey person’ or…?), and master. Within these, you watch as an apprentice, practice as a journeyman, and share as a master. That isn’t a bad approximation of the whole ‘cognitive apprenticeship’ approach.
The article misses some nuances, of course (and the author acknowledged this). For instance, the role of deliberate practice is important: it’s not just repetition, but the ‘right’ repetition. And my commenter brought up the role of epistemological stance: learners need to own their own learning.
The starting point from the comment, however, was that the audience varied in background knowledge; some were relative novices, others were experienced. To me, that calls for a ‘leveling’ approach: preparatory material that you can test out of, and otherwise go through. This helps ensure that everyone starts the learning experience sharing at least a baseline of terminology. You don’t want to be presenting content in that valuable face-to-face time!
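To make that concrete, here’s a minimal sketch of such a test-out gate; the function name and the 80% mastery threshold are my own illustrative assumptions, not a prescription:

```python
# A minimal sketch of a 'test out' gate for preparatory material.
# The 0.8 mastery threshold is an illustrative assumption.

def needs_prep_module(pretest_score: float, mastery_threshold: float = 0.8) -> bool:
    """Route the learner to the preparatory material if they score below threshold."""
    return pretest_score < mastery_threshold

# Hypothetical learners with pre-test scores
learners = {"relative novice": 0.45, "experienced practitioner": 0.92}
for name, score in learners.items():
    route = "prep module first" if needs_prep_module(score) else "straight to the session"
    print(f"{name}: {route}")
```

The point is simply that the gate runs before the face-to-face time, so everyone arrives sharing at least the language.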
The details involved in making learning experiences work are many. It’s about what to teach, how, how to address audience diversity, and more. It’s about meta-learning for ourselves and our learners. That’s why I advocate learning about how we learn, the cognitive science that (should) drive how we do what we do. So, who wants to learn?
I mentioned in yesterday’s post that one thing I do when developing objectives is focus on decisions. Simple ones will get automated; we can train AI to handle those. What will make the difference between ordinary and extraordinary organizations is the ability to make decisions in this new VUCA environment (volatile, uncertain, complex, and ambiguous). That made me wonder: how do you develop the ability to make better choices?
AI can be trained in a couple of ways to answer questions and make these decisions. We can use machine learning to train a system on a historical database (watching out for bias). We can use semantic analysis to read documents and build a system that can answer questions about them. But such systems are very limited: they can’t handle questions at the periphery of their knowledge well, and fall apart in related areas. People handle these better, if their expertise has been developed.
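As a sketch of the first approach, here’s what training on a historical decision database might look like with scikit-learn; the file name, feature columns, and the group-wise bias check are all hypothetical stand-ins:

```python
# A sketch only: the CSV, column names, and 'group' attribute are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("historical_decisions.csv")   # hypothetical decision log
X = df[["feature_a", "feature_b"]]             # assumed predictor columns
y = df["decision"]                             # the recorded decision

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# 'Watching out for bias': check whether accuracy differs across groups.
test_rows = df.loc[X_test.index]
for group, subset in test_rows.groupby("group"):
    acc = model.score(subset[["feature_a", "feature_b"]], subset["decision"])
    print(f"accuracy for {group}: {acc:.2f}")
```

Such a model interpolates within the data it has seen, which is exactly why it struggles at the periphery, where developed human expertise still wins.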
Now, developing this expertise isn’t straightforward. If there were simple decision trees, we could automate them as above. Instead, what works best is expert models that have been abstracted through dialog and practice. These need to be augmented with an awareness of adjacent fields. For instance, for instructional design we should have an awareness of interface design, graphic design, media production, etc. So how do we develop this?
We certainly need to develop the expert models we know play a role. But this gets circular with the above unless we find a way to break out of the predictable. I suggested one approach to this in my ‘shades of grey’ post: having groups work together to make categorization choices (is this, or is this not, legal?). That was, however, focused on compliance, and the need is much broader.
We first need to identify the situations, the relevant models, and the scope of likely variation. We can’t provide specific data (or we’d train the system on it), so we need to anticipate a spread. And we could just train that, but I want to go further.
I’d want to use such a process to choose situations, and then design group work, for the reasons I identified here. (Resourced with models and examples, of course.) We want to get learners working together to address complex problems. We want them to use their various understandings to illuminate the underlying models. If you can get productive discussion (and this needs to be designed in and facilitated), the learners’ thinking will be enriched. (And they may have folks to call on when the situations do arise ;).
Collaboration in learning is second best to collaboration in problem-solving. We should do the latter when we can, but we should do the former anyway. For better learning, and for those times when there isn’t the luxury of working with others.
I reckon this would lead to better decision-making ability. What do you think?
As I mentioned, I’ve been listening; in this case, to Guy Wallace. As one of the premier promoters of evidence-based design, he responded to my question about what to post on with:
Any “How Tos” using methods, tools, and techniques that you’ve found to work in L&D and Performance Improvement.
Since I’m a fan of Guy’s work, I thought I should answer! Now, obviously I don’t work in a typical L&D environment, so this list is somewhat biased. I mentally ran through memorable projects from the past and looked for the success factors. Beyond the best principles I usually advocate, here are a few tips and tricks I’ve used over the years:
Looking at them, I see that they generally reflect my overall focus on aligning what we do with how we think, work, and learn. Your thoughts?
Listening is a vital skill. It’s something that made my mother very popular, because she listened, remembered, and asked about whatever you said the next time you saw her. She cared, and it showed. I wish I were as good a listener! But it’s critical to really listen (or, as some have it, not just listen, but hear).
It’s part of the skillset necessary to innovate. Innovation can be about problem-solving, but design thinking has it that it’s really about problem-finding. That is, you want to understand the real problem first. And the way to really understand the problem, in the initial divergence, is to listen. That means listening to people, but also to signals in general: what the data tells you.
And so, listening is an important part of communicating and collaborating. We need to attend to what’s being said (and maybe even what’s not being said) to truly hear. And we’ll likely need to ask as well. This is good, because it shows we’re paying attention. Conversation is both speaking and listening.
And what precipitated this discussion is that in my new column for Learning Solutions (Quinnsights ;), I asked for questions, and one response will be the topic of my next article for them. I thought asking was a good principle.
So, here’s the question:
Is there anything in particular you’d like me to post about here?
As it is, I post about what I’m thinking about or working on (usually somewhat anonymously). However, I could benefit from hearing what you’re thinking about. And I’ll post on it if I can. Of course, you should be posting on what you’re thinking about too (#ShowYourWork #WorkOutLoud), but hey, why not cross-communicate? I already appreciate the comments I get, but this is just another way to feed my brain.
So, this is me listening. Anyone want to catch my ear?
“Conversations are the stem cells of learning.” – Jay Cross
I recently read something that intrigued me. I couldn’t find it again, so I’ll paraphrase the message. As context, the author was talking about how someone with a different world view was opining about the views of the author. And his simple message was “if you want to know what I, or an X, thinks, ask me or an X. Don’t ask the anti-X.” And I think that’s important. We need to talk together to figure things out. We have to get out of our comfort zone.
It’s all too evident that we seem to be getting more divisive. And it’s too easy these days to only see stuff that you agree with. You can choose to only follow channels that are simpatico with your beliefs, and even supposedly unbiased platforms actually filter what you see to keep you happy. Yet, the real way to advance, to learn, is to see opposing sides and work to find a viable resolution.
Innovation depends on creative tension, and we need to continue to innovate. So we need to continue to engage. Indeed, my colleague Harold Jarche points to the book Collaborating with the Enemy and argues that’s a good thing. The point is that when things are really tough, we have to go beyond our boundaries. And life is getting more complex.
So I keep connections with a few people who don’t think like me, and I try to understand the things they say. I don’t want to listen just to those who think like me; I recognize that I need to understand other viewpoints if we’re going to make progress. Of course, I can’t guarantee reciprocity, but I recognize that’s not my problem.
And I read what academic research has to say. I prefer peer review to opinion, although I keep an open mind about the problems with academic research as well. I have published enough, and reviewed many submissions, to recognize the challenges. Yet it’s better than the alternative ;).
This is, however, the way we have to be as professionals. We have to understand other viewpoints. It matters to the wider world, but also in the smaller worlds we inhabit professionally. We need to talk. And face to face, it turns out, matters. Which may not be a surprise. Still, getting together with colleagues, attending events, and talking, even disagreeing (civilly), are all necessary.
So please, talk. Engage. Let’s figure stuff out and make things better. Please.
Given my reflections on the past year, it’s worth thinking about the implications. What trajectories can we expect if the trends are extended? These are not predictions (as has been said, “never predict anything, particularly the future”). Instead, these are musings, and perhaps wishes for what could (even should) occur.
I mentioned an interest in AR and VR. I think these are definitely on the upswing. VR may be on a rebound from some early hype (certainly ‘virtual worlds’), but AR is still in the offing. And the tools are becoming more usable and affordable, which typically presages uptake.
I think the excitement about AI will continue, but I reckon we’re already seeing a bit of a backlash. I think that’s fair enough. And I’m seeing more talk about Intelligence Augmentation, and I think that’s a perspective we continue to need. Informed, of course, by a true understanding of how we think, work, and learn. We need to design to work with us. Effectively.
Fortunately, I think there are signs we might see more rationality in L&D overall. Certainly we’re seeing lots of people talking about the need for improvement. I see more interest in evaluation, which is also a good step. In fact, I believe it’s a good first step!
I hope it goes further, of course. The cognitive perspective suggests everything from training & performance support, through facilitating communication and collaboration, to culture. There are many facets that can be fine-tuned to optimize outcomes.

Similarly, I hope to see a continuing improvement in learning engineering. That’s part of the reason for the Manifesto and the Quinnov 8. How it emerges, however, is less important than that it does. Our learners, and our organizations, deserve nothing less.
Thus, the integration of cognitive science into the design of performance and innovation solutions will continue to be my theme. When you’re ready to take steps in this direction, I’m happy to help. Let me know; that’s what I do!
The end of the calendar year, although arbitrary, becomes a time for reflection. I looked back at my calendar to see what I’d done this past year, and it was an interesting review. Places I’ve been and things I’ve done point to some common themes. Such is the nature of reflection.
One of the things I did was speak at a number of events. My messages have been pretty consistent along two core themes: doing learning better, and going beyond the course. Both were presented at TK17, which started the year, and one or the other was reiterated through other ATD and Guild events.
With one exception. For my final ATD event of the year, I spoke on Artificial Intelligence (AI). It was in China, and they’re going big into AI. It’s been a recurrent interest of mine since I was an undergraduate. I’ve been fortunate to experience some seminal moments in the field, and even dabble. The interest in AI does not seem to be abating.
Another persistent area of interest has been Augmented Reality (AR) and Virtual Reality (VR). I attended an event focused on Realities, and I continue to believe in the learning potential of these approaches. Contextual learning, whether building fake or leveraging real, is a necessary adjunct to our learning. One AR post of mine even won an award!
My work continues to span both organizational learning and higher education. Interestingly, I spoke to an academic audience about the realities of workplace learning! I also had a strategic engagement with a higher education institution on improving elearning.
I also worked on a couple of projects. One, mentioned last week, was a course on better ID. I’m still proud of the eLearning Manifesto (as you can see in the sidebar ;). And I continue to want to help people do better at using technology to facilitate learning. I think the Quinnov 8 are a good way.
All in all, I still believe that pursuing better and broader learning and performance is a worthwhile endeavor. Technology is a lovely complement to our thinking, but we have to do it with an understanding of how our brains work. My last project from the year is along these lines, but it’s not yet ready to be announced. Stay tuned!
Ok, so I told you the story of the video course I was creating on what I call the Quinnov 8, and now I’ll point to it. It’s available through Udemy, and I’ve tried to keep the price low. With their usual discounts, it should be darn near free ;). Certainly no more than a few cups of coffee.
It’s about an hour of video of me talking, with a few diagrams and text placeholders. I’ve included quizzes for each of the content sections. I also have assignments to go away and apply the principles to your own work. Finally, I created a page or more for each section showing some ideas, models, and more.
I don’t recommend going through it in one run. I can’t control that, but as I mention in the course, you want to space it out; we know that leads to better outcomes. Take a section a week or so, perhaps, doing the work and coming back to reactivate before moving on.
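As an aside, the schedule I have in mind is trivially simple; here’s a toy sketch where the section labels, start date, and intervals are purely illustrative:

```python
from datetime import date, timedelta

# Toy spacing schedule: one section a week, with a reactivation pass
# before the next. Section labels and dates are purely illustrative.
sections = ["Section 1", "Section 2", "Section 3", "Section 4"]
start = date(2018, 1, 8)  # arbitrary start

for i, section in enumerate(sections):
    study = start + timedelta(weeks=i)
    reactivate = study + timedelta(days=5)  # revisit before moving on
    print(f"{section}: study {study}, reactivate {reactivate}")
```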
The content is organized around what I’m terming the Quinnov 8, the eight elements I think are core to making the step to better elearning design. While the ideal is to push to a robust iterative and prototyping model, I’m focusing mostly on the small steps that will give you the greatest leverage. The elements are:
I’m trying to go deep, that is, to unpack the levels of cognitive depth to explain how the Quinnov 8 elements work. I’ve identified the challenges I’ve faced, and I may well update it over time, but it’s at a stage where I can at least give you the chance to explore. I welcome your feedback, but I reckon this is one way you can further your understanding without a significant budget.
I’m using a standard for organizational learning quality in the process of another task. Why or for whom doesn’t matter. What does matter is that there are two problems in their standard that indicate we still haven’t overcome some pernicious problems. And we need to!
So, for the first one, this is in their standard for developing learning solutions:
Uses blended models that appeal to a variety of learning styles.
Do you see the problem here? Learning styles are debunked! There’s no meaningful and valid instrument to measure them, and no evidence that adapting to them is of use. Appealing to them is a waste of time and effort. Design for the learning instead! Yet here we see a standards organization lending this message legitimacy.
The second one is also problematic, in their standard for evaluation:
Reports typical L&D metrics such as Kirkpatrick levels, experimental models, pre- and post-tests and utility analyses.
This one’s a little harder to see. If you think about it, however, you should see that pre- and post-test measures aren’t good measures. What you’re measuring is a delta, and the problem is, you would expect a delta; it doesn’t really tell you anything. If the resulting performance isn’t up to scratch, the gain doesn’t matter! What you want to do instead is confirm that you’re achieving an objectively set level of performance. Are they now able to perform? Or how many are? Doing pre-post comparison is like norm-referenced assessment (e.g., grading on a curve) when you should be doing criterion-referenced assessment.
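A toy calculation shows why; the numbers below are invented solely to illustrate how a healthy-looking delta can coexist with nobody meeting the criterion:

```python
# Invented numbers: a big average gain, yet no one can actually perform.
pre  = [20, 30, 40]   # pre-test scores (%)
post = [55, 60, 70]   # post-test scores (%)
criterion = 80        # objectively set performance level (%)

avg_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"average pre/post gain: {avg_gain:.0f} points")   # looks impressive

meeting = sum(score >= criterion for score in post)
print(f"learners meeting the criterion: {meeting}/{len(post)}")  # 0 of 3
```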
And this is from an organization that purports to communicate L&D quality! These are both from their base level of operation, which means this counts as acceptable practice. This is evidence that our problems aren’t just in practice; they’re pernicious, present in the mindset of even the supposed experts. Is it any wonder the industry is having trouble? And I haven’t rigorously reviewed the standard, I was merely using it (I wonder what I’d find if I did?).
Maybe I’m being too harsh. Maybe the wording doesn’t imply what I think it does. But I’ll suggest that we need a bit more rigor, a bit more attention to science in what we do. What have I missed?