Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

20 September 2017

Transparency

Clark @ 8:09 AM

I believe that transparency is a good thing. It builds trust, as it makes it hard to hide things.  And trust is important. So, in the spirit of transparency, it occurred to me to share a little bit about me and this blog. Here I lay out who I am, why I write it, and what I write about.

You can find out more via the ‘about Clark Quinn’ link in the right column, but in brief, I saw the connection between computing and learning as an undergraduate, and it’s been my career ever since. It’s not just my vocation, but it’s my avocation: I enjoy exploring cognition and technology. And while I’ve done the science and track it, what I revel in (and have demonstrable capability for), is applying cognitive and learning science to create new approaches and fine-tune existing ones.  Learning engineering, if you will.

And, for a variety of reasons, I do this as a consultant. I make my living providing strategic guidance for clients.  I speak at events, and write books, but my main income is from consulting. Which means you should hire me.  I assist organizations in improving their processes and products, both tactically and strategically. My clients have been happy, and find it's good value. What you get are unique ideas that are practical and yet effective. Ideas you aren't likely to have come up with, but are valuable. I really do Quinnovate! Check out the Quinnovation site for more.  Of course, I do have to live in the real world, and so I need to find ways to do this that are mutually beneficial.

Yet generating business isn't why I write this blog.  I started writing this blog as an experiment and originally tried to write 5 days a week (but was happy if that ended up being 2-3 times a week).  My commitment now is 2 per week (though it rarely ends up being 1 or 3).  And I haven't monetized it: there's no advertising, and while I occasionally mention where I'm speaking or the like, I haven't used this as a way to sell things. Hopefully that can continue.

So, the reason I write is to think ‘out loud’.  It’s largely for me: it makes me think. I’m just always curious! I’ve previously recounted the story about how I was on a panel answering questions from the audience, and one of my fellow panelists commented that I had an answer for everything. The reason is that, in the ongoing attempt to populate the blog, I’ve looked at lots of things. As my client engagements have been in many different areas, I also have wide-ranging experience to draw upon.  And I just naturally reflect, but getting concrete (diagramming and/or writing) provides additional benefits.

Thus, the process of continually writing (for over 10 years now) means I’m looking at lots of things, reflecting on them, and sharing my thoughts. I also make a point to look at related fields, and look for connections. I also look at what’s happening with technology. In general, I look with a critical eye, as I was trained as a scientist.  I think that’s valuable as well, because there still is a lot of nonsense trotted out, and there’s always some new buzzword that’s being loosely tossed about. Blogging’s given me cause to continue to tune my thinking, and at least some folks have commented that they’ve found it useful.

Mostly I write about things related to technology, learning, and individual and organizational implications. It includes diversions to innovation, design, wisdom, performance support, and the like, because they have implications for practice. In many ways I see approaches that aren’t well aligned with how we think, work, and learn, and that strikes me as both a shame, and an opportunity to improve. And that’s what I enjoy, finding ways to improve what we do.

So that’s it: I blog to facilitate my understanding, because cognitive science and technology is my passion. It isn’t a direct business move.  I do need to make a living, and prefer to do it in the area of my passion, and fortunately have been successful so far.  (Which isn’t to say you shouldn’t find a reason to use me: there are never enough opportunities to assist in improvement, and I’m not a salesperson ;).  And yes, this life is a learning experience all in itself!  I hope this is clear, but in the interests of transparency I welcome your inquiries and comments. Stay curious, my friends.


19 September 2017

Patty McCord Litmos Keynote Mindmap

Clark @ 3:53 PM

Patty McCord, famous for the Netflix Culture Deck, spoke on culture. She talked about sharing the stage with sports coaching legends, and how they were personal but focused. Her stories of the early days of Netflix and how they made tough but fair decisions were peppered with important lessons.

Keynote mindmap

Mark Kelly C3 Keynote Mindmap

Clark @ 1:06 PM

Astronaut Mark Kelly gave a warm, funny, and inspiring talk.  He used stories from his youth, learning to fly, becoming an astronaut, and being husband to Gabby Giffords to emphasize key success factors.

(I confess that owing to his style of elocution, punctuating stories with very pithy comments, I may have missed a point or two at the beginning until I picked up on it.)


15 September 2017

AI Reflections

Clark @ 8:07 AM

Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple’s Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was: how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?

It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, and how to develop it in appropriate ways. Concerns about AI’s capability, roles, and potential misuses were addressed.  Here I’m presenting just a couple of the thoughts it triggered, as I’ve previously riffed on IA (Intelligence Augmentation) and Ethics.

One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games.  Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.

A related question was what AI can do.  Put very simply, the proposal was that AI can do anything you could make a judgment on in a second: whether what you see is a face, say, or whether a claim is likely to be fraudulent.  If you can provide a good (large) training set that says ‘here’s the input, and this is what the output should be’, you can train a system to do it.  Or, in a well-defined domain, you can say ‘here are the logical rules for how to proceed’, and build that system.
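To make those two recipes concrete, here's a toy sketch in plain Python. The fraud "data" and thresholds are entirely invented for illustration; the supervised route is shown with a trivially simple nearest-neighbor learner, standing in for whatever real training method an organization would use.

```python
def train_nearest_neighbor(examples):
    """Supervised route: store (input, label) pairs and classify new
    inputs by the label of the closest training example."""
    def classify(x):
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(examples, key=lambda pair: dist(pair[0], x))[1]
    return classify

# "Here's the input, and this is what the output should be."
# Inputs are (claim amount, prior claims this year); labels are made up.
training = [
    ((100, 1), "ok"), ((120, 1), "ok"), ((90, 2), "ok"),
    ((5000, 9), "fraud"), ((4200, 8), "fraud"),
]
classify = train_nearest_neighbor(training)
print(classify((110, 1)))    # -> ok
print(classify((4800, 9)))   # -> fraud

# Rule-based route for a well-defined domain:
# "here are the logical rules for how to proceed."
def rule_based(amount, prior_claims):
    return "fraud" if amount > 1000 and prior_claims > 5 else "ok"

print(rule_based(110, 1))    # -> ok
```

The contrast is the point: the first system's behavior comes entirely from its examples, the second's entirely from its hand-written rules, and neither can sensibly answer questions outside what it was given.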

The ability to do these tasks, another point ran, is what leads to fear: “Wow, they can be better than me at this task, how soon will they be better than me at many tasks?”  The important point made is that these systems can’t generalize beyond their data or rules.  They can’t say: ‘oh, I played this video driving game, so now I can drive a car’.

Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away.  It would either have to have a full set of knowledge about the world, or it would need both the capacity and the experience that a human learns from (starting as a baby).  Neither approach has shown any sign of being close.

A side issue was that of the datasets.  It turns out that datasets can embody or teach implicit biases. One case study mentioned was how Asian faces triggered ‘blinking’ warnings, owing to typical eye shape. And this was from an Asian company!  Similarly, word recognition ended up biasing women towards associations with kitchens and homes, compared to men.  This raises a big issue when it comes to making decisions: could loan offerings, fraud detection, or other applications of machine learning inherit bias from their datasets?  And if so, how do we address it?
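A toy sketch of how that word-association bias might be surfaced: if a corpus has "woman" co-occurring with "kitchen" far more often than "man" does, any model trained on those counts inherits the skew. The counts below are invented purely for illustration.

```python
# Invented co-occurrence counts standing in for a real corpus.
cooccurrence = {
    ("woman", "kitchen"): 120, ("man", "kitchen"): 40,
    ("woman", "office"): 60,   ("man", "office"): 140,
}

def association_ratio(word_a, word_b, context):
    """How much more strongly word_a associates with the context than word_b.
    A ratio far from 1.0 is the kind of skew a bias audit would flag."""
    return cooccurrence[(word_a, context)] / cooccurrence[(word_b, context)]

print(association_ratio("woman", "man", "kitchen"))  # -> 3.0
print(association_ratio("man", "woman", "office"))   # roughly 2.3
```

The model never "decides" to be biased; it faithfully reproduces whatever regularities, fair or not, the data contains. Which is exactly why auditing the data matters before the decisions do.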

Similarly, one issue was that of trust. When do we trust an AI algorithm?  One suggestion was that it would come through experience (repeatedly seeing benevolent decisions or support).  Which wouldn’t be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns was whether that could be misleading: we trust one algorithm, and then transfer that trust (inappropriately) to another?  That wouldn’t be unknown in human behavior either.  Do we need a whole new set of behaviors around NPCs? (Non-Player Characters, a reference to game agents that are programmed, not people.)

One analogy that was raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work?  Or did that mean new jobs emerged to be filled?  Or, since we’re now automating human-type tasks, will there be fewer tasks overall? And if so, what do we do about it?  It clearly should be a conscious decision.

It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?


13 September 2017

Why AR

Clark @ 8:07 AM

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up.  I’m sure there’s lots more that could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand drawn), so understand I’m talking concept here, not execution!

Magnolia

Here, I’m starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses).  The system could tell me it’s a tree, not a bush, technically (thanks to my flora-wise better half).  It could also illustrate how large it is.  Finally, the view could indicate that what I’m viewing is a magnolia (which I wouldn’t have known), and show me off to the right the flower bud stage.

The point is that we can get information around the particular thing we’re viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses.  Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn.  And, perhaps, with some additional incidental information on the periphery of my interests, for serendipity.

Neighborhood view

Going wider, here I’m looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more: point out that the green ridges are grapes, or provide the name of the neighborhood in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view.  As we moved around, we’d point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges.  We could also identify the river flowing past to the north.  And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever’s relevant could be filtered in or out.

Road pic

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I’ve pointed to the clouds (and indicated the likelihood of rain). Similarly, I’ve identified the rock and the mechanism that shaped it. (These are all made up, and could be wrong; Mt Faux definitely is!)  We might even be able to tap a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, we can capitalize on them to develop understanding. And this is as an adult. Think about doing this for kids, layering on information in their Zone of Proximal Development and interests!  I know VR’s cool, and has real learning potential, but there you have to create the context. Here we’re taking advantage of it. That may be harder, but it’s going to have some real upsides when it can be done ubiquitously.

7 September 2017

Developing L&D

Clark @ 8:05 AM

One of the conversations I’ve been having is how to shift organizations into modern workplace learning. These discussions have not been with L&D, but instead targeted directly at organizational strategy. The idea is to address a particular tactical goal as part of a strategic plan, and to do so in ways that both embody and develop a learning and collaboration culture. The topic was then raised about how you’d approach an L&D unit under this picture. And I wondered whether you’d use the same approach to developing L&D as part of L&D operations. The answer isn’t obvious.

So what I’m talking about here would be to take an L&D initiative, and do it in this new way, with coaching and scaffolding. The overall model involves a series of challenges with support.  You’re developing some new organizational capability, and you’d scaffold the process initially with some made up or pre-existing challenges.  Then you gradually move to real challenges. So, does this model change for L&D?

My thought was that you’d take an L&D initiative, something out of the ordinary, an experiment.  Depending on the particular organization’s context, it might be performance support, or social media, or mobile, or…  Then you define an experiment, and start working on it. To develop the skills to execute, you give a team (or teams) some initial challenges: e.g., critique a design. Then more complex ones, so: design a solution to a problem someone else has solved. Finally, you give them the real task, and let them go (with support).

This isn’t slow; it’s done in sprints, and still fits in between other work. It can be done in a matter of weeks.  In doing so, you’re having the team collaborate with digital tools (even if/while working F2F, but ideally you have a distributed team). Ultimately, you are developing both their skills on the process itself and on working together in collaborative ways.

In talking this through, I think this makes sense for L&D as well, as long as it’s a new capability that’s being developed.  This is an approach that can rapidly develop new tactical skills and change to a culture oriented towards innovation: experimentation and iterative moves. This is the future, and yet it’s unlike most of the way L&D operates now.

Most importantly, I think, is that this opportunity is on the table now for a brief period. L&D can internally develop their understanding of, and ability with, the new ways of working as a step towards being an organization-wide champion. The same approach taken within L&D then can be taken and used elsewhere. But it takes experience with this approach before you can scale it.  Are you ready to make the shift?

5 September 2017

Metaphors for L&D

Clark @ 8:02 AM

What do you see the role of L&D being in the organization?  Metaphors are important, as they form a basis for inferences of what fits. We frame our conversations by the metaphors we use, and these frames guide what’s allowed conversation and what’s not.  To put it another way, metaphors are the basis for mental models that explain and predict what happens.  But metaphors and models simplify things, making certain things ‘invisible’.  Thus, our metaphors can keep us from seeing things that might be relevant.

LEARNING & development

Thus, we should examine the metaphors we’re using in L&D.  We can start, of course, even with the term L&D: Learning & Development.  Typically, it’s the ‘learning’ part that dominates: we’re talking about helping people learn. And this metaphor implies: courses. Yet, we know that formal learning is only part of the picture of full development of capability. So the ‘development’ part should play a role, including coaching and the choice of assignments. Perhaps also meta-learning.  Though I’d suggest that these latter bits aren’t prominent, because learning can be a mechanism for development, and therefore the following steps lag. Which is why movements like 70:20:10 can be helpful in awakening a broader emphasis.

However, there’s more. In Revolutionize Learning & Development, I argued that we should switch the term to P&D, Performance & Development. Here I was trying to recognize that our learning has a goal: the ability to perform. Also, there are other paths to performance, including performance support.  I still wanted development, including formal learning, but we also want to develop the ability for the organization to continue to learn: innovation.  And I’m not claiming that this solves the problem, as P&D might end up emphasizing only performance, just as L&D ends up emphasizing only learning.

The point is that we need a perspective that doesn’t limit our vision. It’s the case that L&D could be just about courses, but I want to suggest that’s not optimal.  A ‘course’ perspective allows the focus to be on the delivery, not on the outcome. With more ability for individuals to learn on their own, traditional courses are likely to wither.  I think it’s a path to irrelevance.

I’ll suggest that we want to be thinking about all the ways that an organization can facilitate doing, and increasing the ability to do. Then we should figure out what parts we can contribute to. If, as I suggest, we want to be professional about understanding learning, then we have a basis to be the best people to guide all of it.

So I don’t know the best metaphor.  What I do believe is that ‘course’, and even ‘learning’, can be limiting. (I’ve also thought that ‘talent development’ is not sufficient.) I’ve suggested P&D, but perhaps it’s organic and about organizational growth. Or perhaps it’s about performance and improving it. So, now, it’s over to you: what do you think would be a helpful way to look at it? Do we need a rebranding, and if so, to what?
