Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

8 November 2017

Marcy Driscoll AECT Keynote Mindmap

Clark @ 10:48 AM

Marcy Driscoll kicked off the Association for Educational Communications and Technology’s annual conference with a thoughtful keynote on leadership. She used her experience as a Dean to explore possibilities and suggestions for what this could and should mean.

Mindmap

7 November 2017

Revisiting 70:20:10

Clark @ 8:03 AM

Last week, the Debunker Club (led by Will Thalheimer) held a Twitter debate on 70:20:10 (the tweet stream can be downloaded if you’re curious).  In ‘attendance’ were two of the major proponents of 70:20:10, Charles Jennings and Jos Arets.  I joined Will as a moderator, but he did the heavy lifting of organizing the event and queueing up questions.  And there were some insights from the conversations and my own reflections.

To start, 70:20:10 is a framework: it’s not a specific ratio but a guide to thinking about the whole picture of developing organizational solutions to performance problems. In the book on the topic by Jos and Charles, along with their colleague Vivian Heijnen, there’s a whole methodology that encompasses 5 roles and 28 steps. The approach goes from a problem to a solution that incorporates tools, formal learning, coaching, and more.

The numbers come from a study of leaders, who felt that 10% of what they learned to do their jobs came from formal learning, 20% came from working with others and coaching, and 70% they learned from trying things and reflecting on the outcomes. The framework’s role is to help people recognize this, and not leave the 70 and the 20 to chance. The goal is to support people along the learning curve, not just abandon them after the ‘event’.

First, my impression was that a lot of people like that the 70:20:10 framework provides a push beyond the event model of ‘the course’. Also, a number struggle with the numbers as a brand, because they feel that the numbers are misleading. And some folks clearly believe that good instructional design should include the social and the activity, so the framework is a distraction. A colleague felt that there were also some who feel that formal learning is a waste of time, but I don’t think that many truly ignore the 10, they just want it in the proper perspective (and I could be wrong).

Now, there are times when the ratio changes. In roles where the consequences of failure are drastic (read: aerospace, medical, military), you tend to have a lot more formal learning.  It can go quite a ways up the learning curve. Ideally, we’d do this for every situation, but in real life we have to strike a balance. If we can do the job right in the 10, and then similarly ensure good practices around the 20 and the 70, we’ll get people up the curve.

Another issue, for me, is that 70:20:10 not only provides a push towards thinking of the whole picture, but like Kirkpatrick (and perhaps better) it serves as a design tool. You should start from what the situation looks like at the end and figure out what can be in the world and what has to be in the head, and then go backwards. You then design your tools, and then your training, and 70:20:10 suggests including coaching, etc.  But starting with the 70 is one of the messages.

So, I like the realization of 70:20:10 (though, to avoid typing all those redundant zeros and colons, I often refer to it as 721 ;)): the focus on designing the full solution, including tools and coaching and more.  I don’t see 70:20:10 as the full solution, since the elements of continual innovation and a learning culture are separate, but it’s a good solution for the performance part of the picture, and the specific parts of development.

2 November 2017

Rules for AI

Clark @ 8:02 AM

After my presentation in Shanghai on AI for L&D, there were a number of conversations that ensued, and led to some reflections. I’m boiling them down here to a few rules that seem to make sense going forward.

  1. Don’t worry about AI overlords. At least, not yet ;).  Rodney Brooks wrote a really nice article talking about why we might be fearing AI, and why we shouldn’t. In it, he cited Amara’s Law: we tend to overestimate technology in the short term, and underestimate its impact in the long term. I think we’re in the short term of AI, and while it’s easy to extrapolate from smart behavior in a limited domain to similar behavior in another (and sensible for humans), it turns out to be hard to get computers to do so.
  2. Do be concerned about how AI is being used. AI can be used for ill or good, and we should be concerned about the human impact.  I realize that a focus on short-term returns might suggest replacing people when possible. And anything rote enough possibly should be replaced, since it’s a sad use of human ability.  Still, there are strong reasons to consider the impact on the people being affected, not least humanitarian, but also practical. Which leads to:
  3. Don’t have AI without human oversight (at least in most cases).  As stated above in 1, AI doesn’t generalize well.  While it can be trained to work within the scope you describe, it will suffer at the boundary conditions, and any ambiguous or unique situations. It may well make a better judgment in those cases, but it also may not. In most cases, it will be best to have an external review process for all decisions being made, or at least ones at the periphery. Because:
  4. Your AI is only as good as its data set and/or its algorithms. Much of machine learning essentially runs on historical datasets. And historical datasets can have historical biases in them.  For instance, if you were to build a career counselor based upon what’s been done in many examples across schools, you might find that women were being steered away from math-intensive careers. Similarly, if you’re using a mismatched algorithm (as happens often in statistics, for example), you could be biasing your results.
  5. Design as if AI means Augmented Intelligence, not Artificial Intelligence (perhaps an extension of 3). There are things humans do well, and things that computers do well. AI is an attempt to address the intersection, but if our goal is (as it should be) to get the best outcome, it’s likely to be a hybrid of the two. Yes, automate what can and should be automated, but first consider what the best total solution would be, and then, if it’s OK to just use the AI, do so. But don’t assume so.
  6. AI on top of a bad system is a bad system. This is, perhaps, a corollary to 4, but it goes further. So, for instance, if you create a really intriguing simulated avatar for practicing soft skills, but you’re still not really providing a good model to guide performance, and good examples, you’re either requiring considerably more practice or risking an inappropriate emergent model.  AI is not a panacea, but instead a tool in designing solutions (see 5).  If the rest of the system has flaws, so will the resulting solution.
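Rule 4 can be sketched with a toy example (the records and the ‘model’ here are entirely invented for illustration): even a trivially simple learner will faithfully reproduce whatever steering is baked into its training data.

```python
from collections import Counter

# Hypothetical historical career-counseling records: (group, recommended track).
# The skew below is deliberate -- it stands in for historical bias.
history = [
    ("m", "math"), ("m", "math"), ("m", "math"), ("m", "humanities"),
    ("f", "humanities"), ("f", "humanities"), ("f", "humanities"), ("f", "math"),
]

def train_majority_model(records):
    """'Learn' the most common recommendation for each group --
    a stand-in for any model that fits historical patterns."""
    by_group = {}
    for group, track in records:
        by_group.setdefault(group, Counter())[track] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train_majority_model(history)
# The model dutifully reproduces the historical steering: students who differ
# only in group membership get different advice.
print(model)  # {'m': 'math', 'f': 'humanities'}
```

The point isn’t the (deliberately naive) algorithm; a sophisticated learner fit to the same skewed records would encode the same steering, just less visibly.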

This is by no means a full set, nor a completely independent one. But it does reflect some principles that emerged from my interactions around some applications and discussions with people. I welcome your extensions, amendments, or even contrary views!

25 October 2017

Addressing Changes

Clark @ 8:03 AM

Yesterday, I listed some of the major changes that L&D needs to acknowledge. What we need now is to look at the top steps that need to be taken.  As serious practitioners in a potentially valuable field, we need to adapt to the changing environment as much as we need to assist our charges to do so. So what’s involved?

We need to get a grasp on technology affordances. We don’t just need to know that the latest technology exists, whether AI, AR, or VR.  Instead, we have to understand what it means in the context of our brains.  What key capabilities does it bring?  Can VR go beyond entertainment to help us learn better? How can AI partner with us?  If we can make practical use of AR, what would we do with it?

In conjunction, we need to understand the realities about us.  We need to take ownership and have a suitable background in how people really think, work, and learn. Further, we need to recognize that they’re all tied together, not separate things. So, for instance, we learn as we work, we think as we learn, etc.

For example, we need to understand situated and distributed cognition. That is, we need to grasp that we’re not formal logical thinkers, but instead very context dependent, and that our thinking is across our tools. As a consequence, we need to design solutions that recognize our individual situations, and leverage technology as an augment. So we want to design human/computer system solutions to problems, not just human or system solutions.

We also need to understand cultural elements. We work better when we are given meaningful work, freedom to pursue those goals, and get the necessary support to succeed. This is not micromanagement, but instead, is leadership and coaching. We also need an environment where it’s safe, expected even, to experiment and even to make mistakes.

We also need to understand that we work better (read: produce better results), when we work together in particular ways. Where we understand that we should allow individual thought first, but then pool those ideas. And we need to show our work and the underlying thinking. Moreover, again, it has to be safe to do so!

And, these are all tied together into a systemic approach!  It can’t be piecemeal, because working together and out loud can’t be divorced from the technology used to enable these capabilities. And giving people meaningful work and not letting them work together, or vice-versa, just won’t achieve the necessary critical mass.

Finally, we also need to do this in alignment with the business. And, let’s be clear, in ways that can be measured!  We need to understand the critical performance needs of the organization, and demonstrate that we’re impacting them in the ways above.

This can be done, and it will be the hallmark of successful organizations. We’re already seeing a wide variety of converging evidence that these changes lead to success. The question is: are you going to lead your organization forward into the future, or keep your head down and do what you’ve always done?

24 October 2017

Acknowledging Changes

Clark @ 8:08 AM

There are a serious number of changes that are affecting organizations.  We’re seeing changes in the information flow, in technology, and in what we know about ourselves. Importantly, these are things that L&D needs to acknowledge and respond to.  What are these changes?

It’s old news that things are happening faster. We’re being overwhelmed with information, and that rate is accelerating. On the other hand, our tools to manage the information flow are also advancing.

Which is the second topic. We’re getting more powerful technology. We can create systems that do tasks that used to be limited to humans. They can also partner with us, providing information based upon who we are, what we’re doing, and what else is going on.

And there are increasing demands for accountability (and transparency). Your actions should be justified. What are you doing, why, and what effect is it having? If you can’t answer these questions, you’re going to be looking for a job.

Most importantly, we’ve learned quite a bit about ourselves that is contrary to many pre-existing beliefs. Specifically ones that influence organizational approaches.  Our myths about how we think, work, and learn are holding us back from achieving optimal outcomes.

For one, there’s a persistent belief that our thinking is in our heads.  Yet research shows that our thinking is distributed across our tools. We use external representations to capture at least part of our thinking, and access information that we can’t keep in our heads effectively.  Yet we seem to depend on courses to put it in the head instead of tools to put it in the world.

Our thinking is also distributed across others. “You’re no longer what you know, but who you know” is a new mantra. So is “the room is smarter than the smartest person in the room” (with the caveat: if you manage the process right ;). Informal and social learning is the work. Yet we still act as if we believe that people should solve problems independently.

And we also act as if how we learn is by information dump.  Add a quiz, so we know they can recognize the right answer if they see it, and they’ve learned!  Er, no. Science tells us that this is perhaps the worst thing we could do to facilitate learning.

In short, our practices are out of date. We’re using patch-it (or ignore-it) solutions to systemic issues.  We address simple things as if they’re not all connected. It’s time to get on top of what’s known, and then act accordingly.  Are you ready to join the 21st century?

18 October 2017

Stay Curious

Clark @ 8:09 AM

One of my ongoing recommendations to people grew out of a toss-off line, playing off an advertisement. Someone asked about a strategy for continuing to learn (if memory serves), and I quipped “stay curious, my friends”.  However, as I ponder it, I think more and more that such an approach is key.

I was thinking of this the other day as “intellectual restlessness”. What I’m talking about is being intrigued by things you don’t understand that have persisted or recently crossed your awareness, and pursuing them. It’s not just saying “how interesting”, but recognizing connections, and pondering how it could change what you do. Even to the point of actually changing!

It also would include pointing other people who would benefit to interesting things.  This doesn’t always have to happen, but in the spirit of cooperation (in the Jarche sense), we could and should contribute, and curate, when we can.  And, ideally, leave trails of your explorations that others can benefit from. Writings, diagrams, videos, what have you, help others as well as yourself.

I was reminiscing that more than 30 years ago, on top of my job designing educational computer games, I was already curious. I still have copies of the old InfoWorld magazines containing reviews I did (one hardware, one software), as well as a journal article based upon undergraduate research I was fortunate to participate in.

And that persistence in curiosity has led to a trail of artefacts. You may have come across the books, book chapters, articles, presentations, etc. And, of course, this blog for the past decade and more. (May it continue!) However, I’m not here to tout my wares, but instead to point to the benefit of being curious.

As things change faster, a continuing interest is what provides an ongoing ability to adapt. All the news about the ongoing changes in jobs and work isn’t likely to lessen.  Staying curious benefits you, your colleagues and friends, and I reckon society in general.  You want to look at many sources of information, track tangential fields, and be open to new ideas.

This isn’t just your choice, of course; ideally your organization is supportive. These lateral inputs are a component of innovation, as is time to allow for serendipity and incubation. Orgs that want to be agile will need these capabilities as well. I suppose organizations need to stay curious too!


11 October 2017

Radical Coherency

Clark @ 8:07 AM

Tied to my last post about insufficient approaches, I was thinking again about the Coherent Organization. Coherency is powerful, but it could be a limiting metaphor.  So I want to explore it a bit further.

First, coherency is powerful.  Lasers, for example, are just light, the same as comes from your lightbulbs, except that the light waves are aligned and focused. When they’re at the same frequency, heading in the same direction, suddenly you can cut steel!

However, an easy interpretation is that you get this right, and it’s then sufficient. But that’s no longer sufficient in organizations. As things change, you need coherency and agility. How do you get both?

I’m suggesting that coherency has to be on many dimensions.  So you have coherency with the organization’s purpose, but people are coherent with each other, and with the customers, and with best principles.  And that latter is important, as best practices won’t transfer unless they’re abstracted and recontextualized.

So what I’m arguing for is a more radical coherency, a coherency that’s in synchrony from an ecosystem perspective. Where people are communicating and collaborating in ways that apply best principles, integrating them into an aligned whole that’s greater than the sum of the parts.

This is a learning organization, but one that’s integrating many disparate elements. That, I think, is a desirable and achievable goal, but it’s more than one program. It’s a campaign that needs an initial focus, and a plan to successfully integrate it into practice first, and then to scale it to both shift practice and culture. It’s non-trivial, but I think it’s more than worthwhile: it’s necessary. What do you think?

10 October 2017

Simple Insufficiency

Clark @ 8:01 AM

As things get more complex, organizations are looking to get more agile. And they’re looking at a wide variety of approaches in different areas. It can be agile, digital transformation, design thinking, and more. And, by and large, these are all good things. And all of them are quite simply insufficient. Why do I suggest this insufficiency? Because the solution is complex.

Organizations are complex organisms. If you try to address them with simple solutions, you will perturb them, but the results will not be as expected. Whether or not you believe the oft-cited 70% failure rate of org change initiatives, the fact is that many or most organizational change initiatives don’t achieve the desired outcomes. As we explore this more, we understand that it requires a ‘ground war, not an air war’, as Sutton & Rao put it in Scaling Up Excellence. And I’ll posit that there’s more.

This isn’t unknown; regardless of label, the folks who are responsible for such initiatives typically argue that it’s a process. Yet orgs still look for the simple packaged program that will turn things around. And while that’s understandable, it’s decreasingly likely to work. It takes a systemic approach.

And what I haven’t seen, and I’m willing to hear of one, is a comprehensive program that addresses the full suite of skills and culture together that constitute a coherent organization. And that’s a non-trivial compendium of elements. There are the cultural elements, and skills, and tools, and more. PKM, WoL (SyW), 70:20:10, teaming, collaborating and communicating, etc, are all elements, but they need to be tied together.

My point, I guess, is that there needs to be an entry point, but also a plan to develop the full suite of skills and move the culture. And, like most meta-learning, it needs to be done around something. So you need a concrete focus to start, some problem you’re working on that you’ll do in the new way, and practice the processes and develop the competencies and culture as you go. For the org, it should be a necessary new extension to the organization’s competencies. For L&D, it should be first applied for some L&D project.

In both cases it needs a plan and support for acquisition, and a realistic time frame for starting and then spreading. It’s not simple, but it’s necessary. Anything else, I fear, is truly insufficient.

6 October 2017

Two good books on learning

Clark @ 8:10 AM

In addition to the existing good books out there (Julie Dirksen’s Design for How People Learn, Patti Shank’s new series, e.g. her book on Practice & Feedback, and Brown et al.’s Make It Stick), I was pointed to two others. One I’d heard about but hadn’t gotten to yet; the other was new to me. And now that I’ve finished them, both are worth recommending and adding to your reading list.

Benedict Carey’s How We Learn is an accessible overview of the science of learning. As a journalist (not a scientist), he documents his own unlikely journey as a learner, and how that matches up with what’s known. His idiosyncratic study habits, he discovers, are actually not that far off from what really does work for learning (as opposed to passing tests, and that’s an important distinction).  He includes practical implications and maintains a motivating style to help others put the advice into practice. His point: it’s what you do as much as how you do it.

This is a book to give to learners to help them understand themselves as learners. The colloquial style and personal anecdotes make the messages comprehensible and relevant. The book includes a full suite of advice about how to learn best.  While it may be hard to convince learners to read a book on learning, this may well be the most valuable investment they can make.

On the other hand, Anders Ericsson’s Peak is very much the translated (co-authored by Robert Pool, a journalist) science book. It’s full of revelations, but laid out with scientific experiments to complement a very thorough set of case studies. What it does beautifully is unmask the myth of ‘native talent’ and unpack the details that lead to expertise. And those details, specifically are about deliberate practice.  

Most importantly, in my mind, is the summary that points out that our focus should not be so much on expert performance, but instead on helping the many achieve meaningful levels of ability that they’ve been turned off from by bad stories.  Too often people will say “I can’t do math” when such abilities can be developed wonderfully. This book, while relevant to individuals, has much more insight to provide for learning designers.  It separates out why you want models like activity-based learning.  And why what we do too often in classrooms and online isn’t helpful.

I’d put these near the top of my recommended reading lists.

5 October 2017

So I was, at least partly, wrong

Clark @ 8:08 AM

A number of years ago, I wrote that pre-testing learners was user abusive (with a caveat). My argument was sensible, qualified, but apparently wrong. Now that I have more of the story, it’s time to rectify my mistake. Of course, there are still remaining questions ;).

My claim was that while pre-testing might have some small benefit, forcing users to test on things they don’t know isn’t nice.  Moreover, I attributed that benefit to activating relevant material, and suggested that there were more humane ways to do it. However, if the pre-test could show that learners did know it, and so be able to skip it, it’d be worthwhile.

However, research has now shown more benefits to pre-testing. That is, causing learners to search for information they don’t have somehow makes the memory traces more susceptible to successful learning subsequently.  Without a full neurological explanation, it appears that the activation goes deeper than just associative awakening. It also appears to be for more than just memory, but actual performance.

This, then, argues that pre-testing is a good thing. Now, I haven’t been able to find a comparison where this pre-testing was compared to a compelling story or question that didn’t require an actual response. Still, I’m willing to believe that the actual requirement for search in a test is more powerful than mere related stories.

And this also makes the case stronger, in my mind, for problem-based learning. That is, if you’re faced with a problem you don’t know the answer to (and it’s a comprehensive question representing the overall learning goal), both the need to look for the answer and (ideally) a compelling story in which it’s important make a good case for the learning to be more effective.

Which doesn’t mean I don’t still feel it’s abusive, but it’s in a good cause.  And it still could be that the learner doesn’t actually have to take a ‘test’, but instead in some less formal way is asked to retrieve the answer.  And it might not.

Regardless, I feel obligated to change my opinion when data contravenes, even in part, a story I previously believed. And it doesn’t even hurt much ;).  Here’s to good design!
