Learnlets

Clark Quinn’s Learnings about Learning

Patty McCord Litmos Keynote Mindmap

19 September 2017 by Clark

Patty McCord, famous for the Netflix Culture Deck, spoke on culture. She talked about sharing the stage with sports coaching legends, and how they were personal but focused. Her stories of the early days of Netflix and how they made tough but fair decisions were peppered with important lessons.

Keynote mindmap

Mark Kelly C3 Keynote Mindmap

19 September 2017 by Clark

Astronaut Mark Kelly gave a warm, funny, and inspiring talk. He used stories from his youth, learning to fly, becoming an astronaut, and being husband to Gabby Giffords to emphasize key success factors.

(I confess that owing to his style of elocution, punctuating stories with very pithy comments, I may have missed a point or two at the beginning until I picked up on it.)

 

AI Reflections

15 September 2017 by Clark

Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple's Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was: how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?

It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, and how to develop it in appropriate ways. Concerns about AI's capabilities, roles, and potential misuses were addressed. Here I'm presenting just a couple of thoughts that were triggered, as I've previously riffed on IA (Intelligence Augmentation) and Ethics.

One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games.  Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.

A related question was what AI can do. Put very simply, the proposal was that AI can do what a person can make a judgment on in a second: whether what you see is a face, or whether a claim is likely to be fraudulent. If you can provide a good (large) training set that says ‘here's the input, and this is what the output should be’, you can train a system to do it. Or, in a well-defined domain, you can say ‘here are the logical rules for how to proceed’, and build that system.
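
To make that distinction concrete, here's a minimal sketch in Python (assuming scikit-learn is available), contrasting training from labeled examples with encoding explicit rules. The fraud-flagging scenario, feature values, and rule threshold are all invented for illustration:

```python
# Hedged sketch: a trained classifier vs. hand-written rules for a
# hypothetical fraud-flagging judgment. All data here is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Here's the input, and this is what the output should be":
# (transaction amount, hour of day) labeled 1 = fraud, 0 = legitimate.
X = np.array([[900, 3], [20, 14], [850, 2], [15, 11], [700, 4], [30, 16]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)   # learn the judgment from examples
print(model.predict([[820, 3]]))         # likely flagged as fraud

# The well-defined-domain alternative: "here are the logical rules".
def rule_based_flag(amount, hour):
    return amount > 500 and hour < 6     # hypothetical rule

print(rule_based_flag(820, 3))           # True
```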

Another point was that the ability to do these tasks is what leads to fear: “Wow, they can be better than me at this task, how soon will they be better than me at many tasks?” The important point made is that these systems can't generalize beyond their data or rules. They can't say: ‘oh, I played this driving video game, so now I can drive a car’.

Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away. It would either have to have a full set of knowledge about the world, or it would have to have both the capacity and the experience that a human learns from (starting as a baby). Neither approach has shown any sign of being close.

A side issue was that of the datasets. It turns out that datasets can carry or impart implicit biases. One case mentioned was how Asian faces triggered ‘blinking’ warnings, owing to typical eye shape, and this was from an Asian company! Similarly, word associations ended up biasing ‘woman’ towards kitchens and homes, compared to ‘man’. This raises a big issue when it comes to making decisions: could loan offerings, fraud detection, or other applications of machine learning inherit bias from datasets? And if so, how do we address it?
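
As a rough illustration of how such associations get measured (not the panel's method, just a common technique), one can compare the similarity of word vectors. The four-dimensional vectors below are made up purely for demonstration; real embeddings have hundreds of dimensions learned from large corpora:

```python
# Hedged sketch: measuring an association gap with cosine similarity.
# The embedding values are invented for illustration only.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "woman":   np.array([0.8, 0.1, 0.3, 0.0]),
    "man":     np.array([0.1, 0.8, 0.3, 0.0]),
    "kitchen": np.array([0.7, 0.2, 0.1, 0.1]),
    "office":  np.array([0.2, 0.7, 0.1, 0.1]),
}

for target in ("kitchen", "office"):
    gap = cosine(emb["woman"], emb[target]) - cosine(emb["man"], emb[target])
    print(f"woman-vs-man association gap for '{target}': {gap:+.2f}")
```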

Similarly, one issue was that of trust. When do we trust an AI algorithm? One suggestion was that trust would come through experience (repeatedly seeing benevolent decisions or support), which wouldn't be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns was whether that could be misleading: we trust one algorithm, and then transfer that trust (inappropriately) to another? That wouldn't be unknown in human behavior either. Do we need a whole new set of behaviors around NPCs? (Non-Player Characters, a reference to game agents that are programmed, not people.)

One analogy raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work? Or did that mean new jobs emerged to be filled? Or, since we're now automating human-type tasks, will there be fewer tasks overall? And if so, what do we do about it? It clearly should be a conscious decision.

It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?

 

Why AR

13 September 2017 by Clark

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up.  I’m sure there’s lots more that  could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand drawn), so understand I’m talking concept here, not execution!

Magnolia

Here, I’m starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses).  The system could tell me it’s a tree, not a bush, technically (thanks to my flora-wise better half).  It could also illustrate how large it is.  Finally, the view could indicate that what I’m viewing is a magnolia (which I wouldn’t have known), and show me off to the right the flower bud stage.

The point is that we can get information around the particular thing we’re viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses.  Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would be dependent on what  I  want to learn.  And, perhaps, with some additional incidental information on the periphery of my interests, for serendipity.
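
A minimal sketch of that filtering idea, with entirely hypothetical data structures and annotation text drawn from the examples above (this is a concept illustration, not any real AR API):

```python
# Hedged sketch: overlay annotations for a recognized object, filtered to
# the viewer's declared interests. All names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Annotation:
    text: str
    topic: str  # e.g. "botany", "medicinal", "ecology"

magnolia_annotations = [
    Annotation("Technically a tree, not a bush", "botany"),
    Annotation("Flower bud stage shown at right", "botany"),
    Annotation("Any medicinal uses would be noted here", "medicinal"),
    Annotation("Animals that interact with it (bees, hummingbirds, ?)", "ecology"),
]

def overlay(annotations, interests):
    """Return only the annotation text matching the viewer's interests."""
    return [a.text for a in annotations if a.topic in interests]

print(overlay(magnolia_annotations, {"ecology", "medicinal"}))
```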

Neighborhood view

Going wider, here I'm looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more: pointing out that the green ridges are grapes, or providing the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view. As we moved around, we'd point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges. We could also identify the river flowing past to the north. And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever's relevant could be filtered in or out.

Road pic

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I've pointed to the clouds (and indicated the likelihood of rain). Similarly, I've identified the rock formation and the mechanism that shaped it. (These are all made up, they could be wrong; Mt. Faux definitely is!) We might even be able to touch a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, we can capitalize on them to drive development. And this is as an adult. Think about doing this for kids, layering on information in their Zone of Proximal Development and their interests! I know VR's cool, and has real learning potential, but there you have to create the context. Here we're taking advantage of it. That may be harder, but it's going to have some real upsides when it can be done ubiquitously.

Developing L&D

7 September 2017 by Clark

One of the conversations I've been having is about how to shift organizations into modern workplace learning. These discussions have not been with L&D, but instead targeted directly at organizational strategy. The idea is to address a particular tactical goal as part of a strategic plan, and to do so in ways that both embody and develop a learning and collaboration culture. The question was then raised of how you'd approach an L&D unit under this picture. And I wondered whether you'd use the same approach to developing L&D as part of L&D operations. The answer isn't obvious.

So what I’m talking about here would be to take an L&D initiative, and do it in this new way, with coaching and scaffolding. The overall model involves a series of challenges with support.  You’re developing some new organizational capability, and you’d scaffold the process initially with some made up or pre-existing challenges.  Then you gradually move to real challenges. So, does this model change for L&D?

My thought was that you’d take an L&D initiative, and something out of the ordinary, an experiment.  Depending on the particular organization’s context, it might be performance support, or social media, or mobile, or…  Then you define an experiment, and start working on it. To develop the skills to execute, you give a team (or teams) some initial challenges: e.g. critique a design. Then more complex ones, so: design a solution to a problem someone else has solved. Finally, you give them the real task, and let them go (with support).

This isn’t slow; it’s done in sprints, and still fits in between other work. It can be done in a matter of weeks.  In doing so, you’re having the team collaborate with digital tools (even if/while working F2F, but ideally you have a distributed team). Ultimately, you are developing both their skills on the process itself  and on working together in collaborative ways.

In talking this through, I think this makes sense for L&D as well, as long as it's a new capability that's being developed. This is an approach that can rapidly develop new tactical skills and shift to a culture oriented towards innovation: experimentation and iterative moves. This is the future, and yet it's unlike most of the way L&D operates now.

Most importantly, I think, this opportunity is on the table now for a brief period. L&D can internally develop its understanding of, and ability with, the new ways of working as a step towards being an organization-wide champion. The same approach taken within L&D can then be used elsewhere. But it takes experience with this approach before you can scale it. Are you ready to make the shift?

Metaphors for L&D

5 September 2017 by Clark 4 Comments

What do you see the role of L&D being in the organization? Metaphors are important, as they form a basis for inferences about what fits. We frame our conversations by the metaphors we use, and these frames guide what's allowed in the conversation and what's not. To put it another way, metaphors are the basis for mental models that explain and predict what happens. But metaphors and models simplify things, making certain things ‘invisible’. Thus, our metaphors can keep us from seeing things that might be relevant.

LEARNING & development

Thus, we should examine the metaphors we’re using in L&D.  We can start, of course, even with the term L&D: Learning & Development.  Typically, it’s the ‘learning’ part that dominates: we’re talking about helping people learn. And this metaphor implies: courses. Yet, we know that formal learning is only part of the picture of full development of capability. So the ‘development’ part should play a role, including coaching and the choice of assignments. Perhaps also meta-learning.  Though I’d suggest that these latter bits aren’t prominent, because learning  can  be a mechanism for development, and therefore the following steps lag. Which is why movements like 70:20:10 can be helpful in awakening a broader emphasis.

However, there’s more. In  Revolutionize Learning & Development, I argued that we should switch the term to  P&D, Performance & Development. Here I was trying to recognize that our learning has a goal: the ability to perform. Also, there are other paths to performance, including performance support.  I still wanted development, including formal learning, but we also want to develop the ability for the organization to continue to learn: innovation.  And I’m not claiming that this can break the problem with learning, as P&D might end up only emphasizing on performance, as L&D ends up only emphasizing learning.

The point is that we need a perspective that doesn't limit our vision. It's the case that L&D could be just about courses, but I want to suggest that's not optimal. A ‘course’ perspective allows the focus to be on the delivery, not on the outcome. With more ability for individuals to learn on their own, traditional courses are likely to wither. I think it's a path to irrelevance.

I’ll suggest that we want to be thinking about all the ways that an organization can facilitate doing, and increasing the ability to do. Then we should figure out what parts we can contribute to. If, as I suggest, we want to be professional about understanding learning, then we have a basis to be the best people to guide all of it.

So I don’t know the best metaphor.  What I do believe is that ‘course’, and even ‘learning’ can be limiting. (I’ve also thought that ‘talent development’ is not sufficient.) I’ve suggested P&D, but perhaps it’s organic and about organizational growth. Or perhaps it’s about performance and increasing. So, now, it’s over to you: what do you think would be a helpful way to look at it. Do we need a rebranding, and if so, to what?

Evidence-based L&D

31 August 2017 by Clark

Conducting Science

Earlier this year, I wrote that L&D was a ‘Field of Dreams‘ industry, running on a belief that “if you build it, it is good”. There's strong evidence that we're not delivering on the needs of the organization. So what is a good basis for finding ways to support people in the moment and develop them over time? We want to look to what research and theory tell us. In short, I think L&D should be evidence-based.

What does the evidence say? There are a number of places where we can look, but first we have to figure out what we can (and should) be doing. I suggest that L&D isn't doing nearly what it could and should, and what it is doing, it is doing badly. So let's start with the latter.

One thing L&D should be doing is making learning experiences that have organizational impact. There's evidence that organizations that measure impact do better. There's also evidence that there are principles on which to design learning that leads to better outcomes. Yet, despite signups for the eLearning Manifesto, there's still evidence that organizations aren't following those principles, if extant elearning is any indication. Similarly, the number of L&D units actually measuring their impact on organizational metrics seems to lag those that, for instance, just use ‘smile sheets‘. And even those are done badly.

There’s also an argument that L&D could and should be considering performance support as well. There are certainly instances where, as I’ve heard it said (and I’m paraphrasing, I can’t find the original quote): “inside every course there’s a lean job aid waiting to get out”. Certainly, performance can improve with a job aid instead of training (c.f. Atul Gawande’s  Checklist Manifesto).

Further actions by L&D include facilitating communication and collaboration. Again, organizations that become learning organizations are more successful than those that don't. The elements of a learning organization include the skills around working together and a culture where doing so can flourish. We know what makes brainstorming work, and more.

In short, there’s a vast body of evidence about how to do things right. It’s time to become professionals, and pay attention. In that sense, we’re organizational learning engineers. While there may be a lack of evidence about the linkage between individual learning and organizational learning, we do know a lot about facilitating each.  And we should.  Are you ready?

 

Coping with Cognition

30 August 2017 by Clark

Our brains are amazing things. They make sense of the world, and have developed language to help us both make better sense together and communicate our learnings. And yet, this same amazing architecture has some vulnerabilities too. I just fell prey to one, and it's making me reflect on what we can do, and what we still can't. Our cognition is powerful, but also limited.

So, yesterday I had a great idea for a post for today. Now, I multi-task, and I have several things going at once. I have strategies to get these things done despite the fact that multi-tasking doesn’t work. So for one, I have a specific goal for several of the projects each day. I write tasks for projects into a project management tool. I even keep windows open to remind me of things to do. And I write non-project oriented tasks into a separate ToDo list.  But…

I didn’t document the blog post idea before I did something else, and got distracted by one of my open projects. I don’t know which, but I lost the post.  Many times, I can regenerate it, but this time I couldn’t.

See, our brain has limitations, and one of them is a limited working memory. We have evolved powerful tools to support those gaps, including those mentioned above. But we can't capture all of them. Will we be able to? Unless I consciously acted at the time, whether asking Siri to note it or making a note, those ephemeral thoughts can escape. And I'm not sure that's a bad thing.

The flaws in our thinking actually have advantages. We can let go of ideas to deal with new ones. And we can miss things because we're focusing on something else. That's the power of our architecture. And if we focus on the power, scaffold as much as we can, and let go of what we can't, we really shouldn't ask for more.

Our ability to scaffold continues to get better. AI, better interfaces, more processing power, better device interoperation, and smaller and more capable sensors are all ongoing. We're learning more about putting that to use via innovation. And yet we'll still have gaps. I think we should be ok with that. Serendipity and experimentation mean we'll have unintended consequences; generally those may be bad, but every once in a while they may be better. And we can't find that without some ‘wildness’ (which is also an argument for nature conservation). So I'm trying not to get too upset. I'm cutting our cognition some slack. Let's not lose the ability to be human.

Extending Engagement

24 August 2017 by Clark 1 Comment

My post on why ‘engagement’ should be added to effective and efficient led to some discussion on LinkedIn. In particular, some questions were asked that I thought I should reflect on.  So here are my responses to the issue of how to ‘monetize’ engagement, and how it relates to the effectiveness of learning.

So the first issue was how to justify the extra investment engagement would entail. It was assumed that it would take extra investment, and I believe it will. Here's why. To make a learning experience engaging, you need some additional things: knowing why this is of interest and relevance to practitioners, and putting that into the introduction, examples, and practice. With practice, that's going to come with only a marginal overhead. More importantly, that is part of also making it more effective. There is some additional information needed, and more careful design, and that certainly is more than most of what's being done now. (Even if it should be.)

So why would you put in this extra effort?  What are the benefits? As the article suggested, the payoffs are several:

  • First, learners know more intrinsically why they should pay attention. This means they’ll pay more attention, and the learning will be more effective. And that’s valuable, because it should increase the outcomes of the learning.
  • Second, the practice is distributed across more intriguing contexts. This means that the practice will have higher motivation.  When they’re performing, they’re motivated because it  matters. If we have more motivation in the learning practice, it’s closer to the performance context, so we’re making the transfer gap smaller. Again, this will make the learning more effective.
  • Third, if you unpack the meaningfulness of the examples, you'll make the underlying thinking easier to assimilate. The examples are comprehended better, and that leads to more effectiveness.

If learning’s a probabilistic game (and it is), and you increase the likelihood of it sticking, you’re increasing the return on your investment. If the margin to do it right is less than the value of the improvement in the learning, that’s a business case. And I’ll suggest that these steps are part of making learning effective,  period. So it’s really going from a low likelihood of transfer – 20-30% say – to effective learning – maybe 70-80%.  Yes, I’m making these numbers up, but…

This is really all part of going from information dump & knowledge test to elaborated examples and contextualized practice.  So that’s really not about engagement, it’s about effectiveness. And a lot of what’s done under the banner of ‘rapid elearning’ is ineffective.  It may be engaging, but it isn’t leading to new skills.

Which is the other issue: a claim that engagement doesn't equal better learning. And in general I agree (see: activity doesn't mean effectiveness in a social media tool). It depends on what you mean by engagement; I don't mean trivialized scores equaling more activity. I mean fundamental cognitive engagement: ‘hard fun’, not just fun. Intrinsic relevance. Not marketing flair, but real value add.

Hopefully this helps!  I really want to convince you that you want deep learning design if you care about the outcomes.  (And if you don’t, why are you bothering? ;).  It goes to effectiveness, and requires addressing engagement. I’ll also suggest that while it  does affect efficiency,  it does so in marginal ways compared to substantial increases in impact.  And that strikes me as the type of step one  should be taking. Agreed?

 

Dual OS or Teams of Teams?

23 August 2017 by Clark

I asked this question in the L&D Revolution LinkedIn group I have to support the Revolutionize L&D book, but thought I’d ask it here as well. And I’ve asked it before, but I have some new thoughts based upon thinking about McChrystal’s Team of Teams. Do we use a Dual Operating System (Dual OS), with hierarchy being used as a base to pull out teams for innovation, or do we go with a fully podular model?

In a Dual OS org, the hierarchy continues to exist for doing the work that is known to need doing. Kotter pulls out select members to create teams to attack particular innovation elements. These teams change over time, so people are cycled back to their work and new folks are infused with the innovation approach.

My question here is whether this really creates an entire culture of innovation. In both Keith Sawyer’s  Group Genius and Stephen Johnson’s  Where Do Good Ideas Come From, real innovation bubbles along, requiring time and serendipity. You can get innovative solutions for known problems from teams, but for new insights you need an ongoing environment for ideas to emerge, collide, percolate/incubate/ferment.  How do you get that going across the organization?

On the other hand, looking at the military, there’s a huge personnel development infrastructure that prepares people to be members of the elite teams. Individuals from these teams intermix to get the needed adaptivity, but it’s based upon a fixed foundation. And there are still many hierarchical mechanisms organized to support the elite work.  So is it really a fully teamed approach?

As I write this, it sounds like you do need the Dual OS, and I'm willing to believe it. My continuing concern, again, is what fosters the ongoing innovation? Can you have an innovative hierarchy as well? Can you have a hierarchy with a culture of experimentation, accepting mistakes, etc.? How do the small innovations in operating process occur along with the major strategic shifts? My intuitions go towards creating teams of teams, and doing so completely. I do believe everyone's capable of innovation, and in the right atmosphere that can happen. I don't think it's separate; I believe it has to be intrinsic and ubiquitous. The question is, what structure achieves this? And I haven't seen the answer yet. Have you? Perhaps we still have some experimentation to do ;).
