Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

31 August 2017

Evidence-based L&D

Clark @ 8:08 AM

Earlier this year, I wrote that L&D was a ‘Field of Dreams’ industry, running on a belief that “if you build it, it is good”.  There’s strong evidence that we’re not delivering on the needs of the organization. So what is a good basis for finding ways to support people in the moment and develop them over time?  We want to look to what research and theory tell us.  In short, I think L&D should be evidence-based.

What does the evidence say?  There are a number of places where we can look, but first we have to figure out what we can (and should) be doing.  I suggest that L&D isn’t doing nearly what it could and should, and what it is doing, it is doing badly.  So let’s start with the latter.

One thing L&D should be doing is making learning experiences that have organizational impact.  There’s evidence that organizations that measure impact do better. There’s also evidence that there are principles on which to design learning that lead to better outcomes.  Yet, despite signups for the eLearning Manifesto, there’s still evidence that organizations aren’t following those principles, if extant elearning is any indication. Similarly, the number of L&D units actually measuring their impact on organizational metrics seems to lag those that, for instance, just use ‘smile sheets’. And even those are done badly.

There’s also an argument that L&D could and should be considering performance support as well. There are certainly instances where, as I’ve heard it said (and I’m paraphrasing, I can’t find the original quote): “inside every course there’s a lean job aid waiting to get out”. Certainly, performance can improve with a job aid instead of training (cf. Atul Gawande’s Checklist Manifesto).

Further actions by L&D include facilitating communication and collaboration. Again, organizations that become learning organizations fare better than those that don’t. The elements of a learning organization include the skills around working together and a culture where doing so can flourish.  We know what makes brainstorming work, and more.

In short, there’s a vast body of evidence about how to do things right. It’s time to become professionals, and pay attention. In that sense, we’re organizational learning engineers. While there may be a lack of evidence about the linkage between individual learning and organizational learning, we do know a lot about facilitating each.  And we should.  Are you ready?

 

30 August 2017

Coping with Cognition

Clark @ 8:03 AM

Our brains are amazing things. They make sense of the world, and have developed language to help us both make better sense together and communicate our learnings. And yet, this same amazing architecture has some vulnerabilities too. I just fell prey to one, and it’s making me reflect on what we can do, and what we still can’t. Our cognition is powerful, but also limited.

So, yesterday I had a great idea for a post for today. Now, I multi-task, and I have several things going at once. I have strategies to get these things done despite the fact that multi-tasking doesn’t work. So for one, I have a specific goal for several of the projects each day. I write tasks for projects into a project management tool. I even keep windows open to remind me of things to do. And I write non-project oriented tasks into a separate ToDo list.  But…

I didn’t document the blog post idea before I did something else, and got distracted by one of my open projects. I don’t know which, but I lost the post.  Many times, I can regenerate it, but this time I couldn’t.

See, our brain has limitations, and one of them is a limited working memory. We have evolved powerful tools to cover those gaps, including those mentioned above. But we can’t capture everything.  Will we ever be able to? Unless I consciously act at the time, whether asking Siri to note it or making a note myself, those ephemeral thoughts can escape.  And I’m not sure that’s a bad thing.

The flaws in our thinking actually have advantages.  We can let go of ideas to deal with new ones. And we can miss things because we’re focusing on something. That’s the power of our architecture.  And if we focus on the power, and scaffold as much as we can, and let go what we can’t, we really shouldn’t ask for more.

Our ability to scaffold continues to get better. AI, better interfaces, more processing power, better device interoperation, and smaller and more capable sensors are all ongoing. We’re learning more about putting that to use via innovation.  And yet we’ll still have gaps. I think we should be ok with that. Serendipity and experimentation mean we’ll have unintended consequences, and generally those may be bad, but every once in a while they may be better. And we can’t find that without some ‘wildness’ (which is also an argument for nature conservation).  So I’m trying not to get too upset.  I’m cutting our cognition some slack. Let’s not lose the ability to be human.

24 August 2017

Extending Engagement

Clark @ 8:09 AM

My post on why ‘engagement’ should be added to effective and efficient led to some discussion on LinkedIn. In particular, some questions were asked that I thought I should reflect on.  So here are my responses to the issue of how to ‘monetize’ engagement, and how it relates to the effectiveness of learning.

So the first issue was how to justify the extra investment engagement would entail. The question assumed it would take extra investment, and I believe it will. Here’s why. To make a learning experience engaging, you need some additional things: knowing why this is of interest and relevance to practitioners, and putting that into the introduction, examples, and practice.  With practice, that’s going to come with only a marginal overhead. More importantly, that is part of also making it more effective. There is some additional information needed, and more careful design, and that certainly is more than most of what’s being done now. (Even if it should be.)

So why would you put in this extra effort?  What are the benefits? As the article suggested, the payoffs are several:

  • First, learners know more intrinsically why they should pay attention. This means they’ll pay more attention, and the learning will be more effective. And that’s valuable, because it should increase the outcomes of the learning.
  • Second, the practice is distributed across more intriguing contexts. This means that the practice will have higher motivation.  When they’re performing, they’re motivated because it matters. If we have more motivation in the learning practice, it’s closer to the performance context, so we’re making the transfer gap smaller. Again, this will make the learning more effective.
  • Third, if you unpack the meaningfulness of the examples, you’ll make the underlying thinking easier to assimilate. The examples are comprehended better, and that leads to more effectiveness.

If learning’s a probabilistic game (and it is), and you increase the likelihood of it sticking, you’re increasing the return on your investment. If the margin to do it right is less than the value of the improvement in the learning, that’s a business case. And I’ll suggest that these steps are part of making learning effective, period. So it’s really going from a low likelihood of transfer – 20-30% say – to effective learning – maybe 70-80%.  Yes, I’m making these numbers up, but…
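To make that arithmetic concrete, here’s a back-of-envelope sketch in Python using the post’s (admittedly made-up) transfer rates; the learner count, per-learner value, and design costs are hypothetical figures chosen only to illustrate the shape of the business case:

```python
# Back-of-envelope business case for deeper learning design.
# Transfer rates come from the post's illustrative numbers;
# cost and value figures below are hypothetical.

learners = 100
value_per_transferring_learner = 1000.0  # dollars of improved performance

def expected_return(transfer_rate, design_cost):
    """Expected value of the program minus what it cost to build."""
    return learners * transfer_rate * value_per_transferring_learner - design_cost

# 'Rapid' design: ~20-30% transfer, cheaper to build.
rapid = expected_return(0.25, design_cost=10_000)
# Deeper design: ~70-80% transfer, assume it costs twice as much.
deep = expected_return(0.75, design_cost=20_000)
```

Even doubling the design budget, the deeper design nets far more value (55,000 vs 15,000 in this toy case), which is the point: if the margin to do it right is less than the value of the improvement, the case makes itself.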

This is really all part of going from information dump & knowledge test to elaborated examples and contextualized practice.  So that’s really not about engagement, it’s about effectiveness. And a lot of what’s done under the banner of ‘rapid elearning’ is ineffective.  It may be engaging, but it isn’t leading to new skills.

Which is the other issue: a claim that engagement doesn’t equal better learning. And in general I agree (see: activity doesn’t mean effectiveness in a social media tool). It depends on what you mean by engagement; I don’t mean trivialized scores equaling more activity. I mean fundamental cognitive engagement: ‘hard fun’, not just fun.  Intrinsic relevance. Not marketing flair, but real value add.

Hopefully this helps!  I really want to convince you that you want deep learning design if you care about the outcomes.  (And if you don’t, why are you bothering? ;).  It goes to effectiveness, and requires addressing engagement. I’ll also suggest that while it does affect efficiency, it does so in marginal ways compared to substantial increases in impact.  And that strikes me as the type of step one should be taking. Agreed?

 

23 August 2017

Dual OS or Teams of Teams?

Clark @ 8:03 AM

I asked this question in the L&D Revolution LinkedIn group I have to support the Revolutionize L&D book, but thought I’d ask it here as well. And I’ve asked it before, but I have some new thoughts based upon thinking about McChrystal’s Team of Teams. Do we use a Dual Operating System (Dual OS), with hierarchy being used as a base to pull out teams for innovation, or do we go with a fully podular model?

In a Dual OS org, the hierarchy continues to exist for doing the work that is known that needs to be done. Kotter pulls out select members to create teams to attack particular innovation elements.  These teams change over time, so people are cycled back to work and new folks are infused with the innovation approach.

My question here is whether this really creates an entire culture of innovation. In both Keith Sawyer’s Group Genius and Stephen Johnson’s Where Do Good Ideas Come From, real innovation bubbles along, requiring time and serendipity. You can get innovative solutions for known problems from teams, but for new insights you need an ongoing environment for ideas to emerge, collide, percolate/incubate/ferment.  How do you get that going across the organization?

On the other hand, looking at the military, there’s a huge personnel development infrastructure that prepares people to be members of the elite teams. Individuals from these teams intermix to get the needed adaptivity, but it’s based upon a fixed foundation. And there are still many hierarchical mechanisms organized to support the elite work.  So is it really a fully teamed approach?

As I write this, it sounds like you do need the Dual OS, and I’m willing to believe it.  My continuing concern, again, is what fosters the ongoing innovation?  Can you have an innovative hierarchy as well? Can you have a hierarchy with a culture of experimentation, accepting mistakes, etc.? How do the small innovations in operating process occur alongside the major strategic shifts?  My intuitions go towards creating teams of teams, but doing so completely. I do believe everyone’s capable of innovation, and in the right atmosphere that can happen. I don’t think it’s separate; I believe it has to be intrinsic and ubiquitous.  The question is, what structure achieves this?  And I haven’t seen the answer yet.  Have you?  Perhaps we still have some experimentation to do ;).

16 August 2017

3 E’s of Learning: why Engagement

Clark @ 8:07 AM

When you’re creating learning experiences, you want to worry about the outcomes, but there’s more to it than that.  I think there are 3 major components for learning as a practical matter, and I lump these under the E’s: Effectiveness, Efficiency, & Engagement. The latter may be more of a stretch, but I’ll make the case.

When you typically talk about learning, you talk about two goals: retention over time, and transfer to all appropriate (and no inappropriate) situations.  That’s learning effectiveness: it’s about ensuring that you achieve the outcomes you need.  To test retention and transfer, you have to measure more than performance at the end of the learning experience. (That is, unless your experience definition naturally includes this feedback as well.) Let alone just asking learners if they thought it was valuable.  You have to see if the learning has persisted later, and is being used as needed.

However, you don’t have unlimited resources, so you need to balance your investment in creating the experience against the impact on the individual and/or organization.  That’s efficiency. The investment should return a multiple of its cost.  This is just good business.

Let’s be clear: investing without evaluating the impact is an act of faith that isn’t scrutable.  Similarly, achieving the outcome at an inappropriate expense isn’t sustainable.  Ultimately, you need to achieve reasonable changes to behavior under a viable expenditure.

A few of us have noticed problems sufficient to advocate quality in what we do.  While things may be trending upward (fingers crossed), I think there’s still a way to go when we’re still hearing about ‘rapid’ elearning instead of ‘outcomes’.  And I’ve argued that the necessary changes produce a cost differential that is marginal, yet yield outcomes that are more than marginal.   There’s an obvious case for effectiveness and efficiency.

But why engagement? Is that necessary? People tout it as desirable. To be fair, most of the time they’re talking about design aesthetics, media embellishment, and even ‘gamification’ instead of intrinsic engagement.  And I will maintain that there’s a lot more possible. There’s an open question, however: is it worth it?

My answer is yes. Tapping into intrinsic interest has several upsides that are worth the effort.  The good news is that you likely don’t need to achieve a situation where people are willing to pay money to attend your learning. Instead, you have the resources on hand to make this happen.

So, if you make your learning – and here in particular I mean your introductions, examples, and practice – engaging, you’re addressing motivation, anxiety, and potentially optimizing the learning experience.

  • If your introduction helps learners connect to their own desires to be an agent of good, you’re increasing the likelihood that they’ll persist and that the learning will ‘stick’.
  • If your examples are stories that illustrate situations the learner recognizes as important, and unpack the thinking that led to success, you’re increasing their comprehension and their knowledge.
  • Most importantly, if your practice tasks are situated in contexts that are meaningful to learners both because they’re real and important, you’ll be developing their skills in ways closest to how they’ll perform.  And if the challenge in the progression of tasks is right, you’ll also accelerate them at the optimal speed (and increase engagement).

Engagement is a fine-tuning, and learners’ opinions on the experience aren’t the most important thing.  Instead, the improvement in learning outcomes is the rationale.  It takes some understanding and practice to get systematically good at doing this. Further, you can learn to make learning engaging; it is an acquired capability.

So, is your learning engaging intrinsic interest, and making the learning persist? It’s an approach that affects effectiveness in a big way and efficiency in a small way. And that’s the way you want to go, right? Engage!

15 August 2017

Innovative Work Spaces

Clark @ 8:09 AM

I recently read that Apple’s new office plan is receiving bad press. This surprises me, given that Apple usually has a handle on the latest ideas.  Yet, upon investigation, it’s clear that they appear not to be particularly innovative in their approach to work spaces.  Here’s why.

The report I saw says that Apple is intending to use an open office plan. This is where all the tables are out in the open, or at best there are cubicles. The perceived benefits are open communication.  And this is plausible when folks like Stan McChrystal in Team of Teams are arguing for ‘radical transparency’.  The thought is that everyone will know what’s going on and it will streamline communication. Coupled with delegation, this should yield innovation, at the expense of some efficiency.

However, research hasn’t backed that up. Open plan offices can even drive folks away, as Apple is hearing. When you want to engage with your colleagues and stay on top of what they’re doing, it’s good.  However, the lack of privacy means folks can’t focus when they’re doing heavy mental work. While it sounds good in theory, it doesn’t work in practice.

When I was keynoting at the Learning@Work conference in Sydney back in 2015, a major topic was about flexible work spaces. The concept here is to have a mix of office types: some open plan, some private offices, some small conference rooms. The view is that you take the type of space you need when you need it. Nothing’s fixed, so you travel with your laptop from place to place, but you can have the type of environment you need. Time alone, time with colleagues, time collaborating. And this was being touted both on principled and practical grounds with positive outcomes.

(Note that in McChrystal’s view, you needed to break down silos. He would strategically insert a person from one area with others, and have representatives engaged around all activities.  So even in the open space you’d want people mixed up, but most folks still tend to put groups together. Which undermines the principle.)

As Jay Cross let us know in his landmark Informal Learning, even the design of workspaces can facilitate innovation. Jay cited practices like having informal spaces to converse, and putting the mail room and coffee room together to facilitate casual conversation.  Where you work matters as well as how, and open plan has upsides but also downsides that can be mitigated.

Innovation is about culture, practices, beliefs, and  technology.  Putting it all together in a practical approach takes time and knowledge to figure out where to start, and how to scale.  As Sutton and Rao tell us, it’s a ground war, but the benefits are not just desirable, but increasingly necessary. Innovation is the key to transcending survival to thrival. Are you ready to (Qu)innovate?

9 August 2017

Simulations versus games

Clark @ 8:04 AM

At the recent Realities 360 conference, I saw some confusion about the difference between a simulation and a game. And while I made some important distinctions in my book on the topic, I realize it may be time to revisit them. So here I’m talking about some conceptual discriminations that I think are important.

Simulations

As I’ve mentioned, simulations are models of the world. They capture certain relationships we believe to be true about the world. (For that matter, they can represent worlds that aren’t real, certainly the case in games.) They don’t (can’t) capture all of the world, but a segment we feel is important to model. We tend to validate these models by testing them to see if they behave like our real world.  You can also think about simulations as being in a ‘state’ (a set of values in variables), and moving to others by rules.  Frequently, we include some variability in these models, just as is reflected in the real world. Similarly, these simulations can model considerable complexity.

Such simulations are built out of sets of variables that represent the state of the world, and rules that represent the relationships present. There are several ways things change. Some variables can be changed by rules that act on the basis of time (while countdown timer = on, countdown = countdown -1). Variables can also interact (if countdown=0: if 1 g adamantium and 1 g dilithium, Temperature = Temperature +1000, adamantium = adamantium – 1g, dilithium = dilithium – 1g).  Other changes are based upon learner actions (if learner flips the switch, countdown timer = on).
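The rules above translate almost directly into code. Here’s a minimal illustrative sketch in Python, reusing the post’s playful countdown example (the variable names and quantities are the post’s own whimsy, not a real system):

```python
# Minimal simulation sketch: state as a set of variables, rules as
# functions applied each tick, plus a learner action.

state = {
    "countdown_on": False,
    "countdown": 10,
    "temperature": 20,     # degrees, arbitrary starting value
    "adamantium_g": 5,
    "dilithium_g": 5,
}

def learner_flips_switch(state):
    # Learner action: turns the countdown timer on.
    state["countdown_on"] = True

def tick(state):
    # Time-based rule: while the timer is on, count down.
    if state["countdown_on"] and state["countdown"] > 0:
        state["countdown"] -= 1
    # Interaction rule: at zero, 1 g of each material reacts for heat.
    if (state["countdown"] == 0
            and state["adamantium_g"] >= 1 and state["dilithium_g"] >= 1):
        state["temperature"] += 1000
        state["adamantium_g"] -= 1
        state["dilithium_g"] -= 1

learner_flips_switch(state)
for _ in range(10):
    tick(state)
```

After ten ticks the countdown reaches zero and the reaction rule fires, heating things up. Variability (the randomness mentioned above) would just mean making some rules probabilistic.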

Note that you may already have a simulation. In business, there may already exist a model of particular processes, particularly if they’re proprietary systems.

From a learning point of view, simulations allow motivated and self-effective learners to explore the relationships they need to understand. However, we can’t always assume motivated and self-effective learners. So we need some additional work to turn a simulation into a learning experience.

Scenarios

One effective way to leverage simulations is to choose an initial state (or ‘space of states’, a start point with some variation), and a state (or set) that constitutes ‘win’. We also typically have states that also represent ‘fail’.  We choose those states so that the learner can’t get to ‘win’ without understanding the necessary relationships.   The learner can try and fail until they discover the necessary relationships.  These start and goal states serve as scaffolding for the learning process.  I call these simulations with start and stop states ‘scenarios’.
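The wrapping described above can be sketched generically: a scenario is just a simulation plus a start state and win/fail conditions. This Python sketch uses a hypothetical rocket toy purely for illustration:

```python
# Scenario sketch: a simulation (rules over state) wrapped with a
# start state and win/fail conditions. The rocket example is made up.

def rules(state):
    # Simulation rule: gravity pulls the craft down each tick.
    if state["altitude"] > 0:
        state["altitude"] -= 1

def burn(state):
    # Learner action: burn fuel to climb.
    if state["fuel"] > 0:
        state["fuel"] -= 1
        state["altitude"] += 10

def run_scenario(initial_state, actions, is_win, is_fail):
    """Apply each learner action, then the rules, until win or fail."""
    state = dict(initial_state)  # the chosen start state
    for action in actions:
        action(state)
        rules(state)
        if is_win(state):
            return "win"
        if is_fail(state):
            return "fail"
    return "incomplete"

result = run_scenario(
    {"altitude": 0, "fuel": 3},
    actions=[burn, burn, burn],
    is_win=lambda s: s["altitude"] >= 25,                      # goal state
    is_fail=lambda s: s["fuel"] == 0 and s["altitude"] == 0,   # fail state
)
```

The learner who burns all three times reaches the goal; choosing the win and fail predicates well is exactly the scaffolding choice described above.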

This is somewhat complicated by the existence of ‘branching scenarios’. They have initial and goal states and learner actions, but these are not represented by variables and rules. The relationships in branching scenarios are implicit in the links instead of explicit in the variables and rules. And they’re easier to build!  Still, they don’t have the variability that’s typically possible in a simulation. There’s an inflection point (qualitative, not quantitative) where the complexity of controlling the branches makes it more sensible to model the world as a simulation rather than track all the branches.
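To see the contrast, a branching scenario is just linked nodes: no variables, no rules, only choices leading to other nodes. A minimal sketch (the situation text and node names are hypothetical):

```python
# Branching scenario sketch: each node holds a situation and the
# choices leading to other nodes. Relationships live in the links.

scenario = {
    "start": {"text": "A customer calls, angry about a late order.",
              "choices": {"apologize_and_investigate": "investigate",
                          "quote_policy": "fail"}},
    "investigate": {"text": "You find the shipment was mislabeled.",
                    "choices": {"offer_fix_and_timeline": "win",
                                "blame_the_warehouse": "fail"}},
    "win": {"text": "The customer is satisfied.", "choices": {}},
    "fail": {"text": "The customer escalates. Try again.", "choices": {}},
}

def play(scenario, path):
    """Follow a list of choices from 'start'; return the end node."""
    node = "start"
    for choice in path:
        node = scenario[node]["choices"][choice]
    return node
```

Every path has to be authored by hand, which is exactly why the approach stops scaling once the branches multiply: past that inflection point, modeling with variables and rules is less work.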

Games

The problem here is that too often people will build a simulation and call it a game. I once reviewed a journal submission about a ‘game’ where the authors admitted that players thought it was boring. Sorry, then it’s not a game!  The difference between a simulation and a game is a subjective experience of engagement on the part of the player.

So how do you get from a simulation to a game?  It’s about tuning.  It’s about adjusting the frequency of events, and their consequences, such that the challenge falls into the zone between boring and frustrating. Now, for learning, you can’t change the fundamental relationships you’re modeling, but you can adjust things like how quickly events occur, and the importance of being correct. And it takes testing and refinement. Will Wright, a game designers’ game designer, once proposed that tuning is 9/10ths of the work!  Now, that’s for a commercial game, but it gives you an idea.

You can also use gamification (scores to add competition), but, please, only after you first expend the effort to make the game intrinsically interesting. Tap into why they should care about the experience, and bake that in.

Is it worth it to actually expend effort to make the experience engaging?  I believe the answer is yes. Perhaps not to the level of a game people will pay $60 to play, but some effort to manifest the innate meaningfulness is worth it. Games minimize the time to obtain competency because they optimize the challenge.  You will have sticks as well as carrots, so you don’t need multi-million dollar budgets, but do tune until your learners have an engaging and effective experience.

So, does this help? What questions do you still have?

8 August 2017

L&D Tuneup

Clark @ 8:00 AM

In my youth, owing to my father’s tutelage and my desire for wheels, I learned how to work on cars. While not the master he was, I could rebuild a carburetor, gap points and spark plugs, and adjust the timing. In short, I could do a tuneup on the car.  And I think that’s what Learning & Development (L&D) needs: a tuneup.

Cars have changed, and my mechanic skills are no longer relevant. What used to be done mechanically – adjusting to altitude, adapting through the stages of the engine warming up, and handling acceleration requests – are now done electronically. The air-fuel mixture and the spark advance are under the control of the fuel injection and electronic ignition systems (respectively) now.  With numerous sensors, we can optimize fuel efficiency and performance.

And that’s the thing: L&D is too often still operating in the old, mechanical, model. We have the view of a hierarchical model where a few plan and prepare and train folks to execute. We stick with face-to-face training or maybe elearning, putting everything in the head, when science shows that we often function better from information in the world or even in other people’s heads!  And this old approach no longer works.

As has been noted broadly and frequently, the world is changing faster and the pressure is on organizations to adapt more quickly. With widely disparate paths  pointing in the same direction, it’s easy to see that there’s something fundamental going on. In short, we need to move, as Jon Husband puts it, from hierarchy to wirearchy.  We need agility: experimentation, review, and reflection, iteratively and collectively. And in that move, there’s a central role for L&D.

The move may not be imminent, but it is unavoidable. Even staid and secure organizations are facing the consequences of increasing rates of change and new technology innovations. AI, networks, 3D printing, there are ramifications. Even traditional government agencies are facing change. Yet, this is all about people and learning.

As Harold Jarche tells us, work is learning and learning is the work. That means learning is moving from the classroom to the workplace and on the go. L&D needs a modern workplace learning approach, as Jane Hart lets us know. This new model is one where L&D moves from fount of knowledge to learning facilitator (or advisor, as she terms it).  People need to develop those communication and collaboration skills, and they won’t come from classes, but from coaching and more.

And, to return to the metaphor, I view this as an L&D tuneup. It’s not about throwing out what you’re doing (unless that’s the fastest path ;), but instead augmenting it. Shifts don’t happen overnight, but instead it means taking on some internal changes, and then working that outwards with stakeholders, reengineering the organizational relationships. It’s a journey, not an event. But like with a tuneup, it’s about figuring out what your new model should be, and then adjusting until you achieve it. It’s over a more extended period of time, but it’s still a tuning operation. You have to work through the stages to a new revolutionary way of working. So, are you ready for a tuneup?

3 August 2017

My policies

Clark @ 8:04 AM

Like most of you, I get a lot of requests for a lot of things. Too many, really. So I’ve had to put in policies to be able to cope.  I like to provide a response (I feel it’s important to communicate the underlying rationale), so I have stock blurbs that I cut and paste (with an occasional edit for a specific context).  I don’t want to repeat them here, but instead I want to be clear about why certain types of actions are going to get certain types of response. Consider this a public service announcement.

So, I get a lot of requests to link on LinkedIn, and I’m happy to, with a caveat. First, you should have some clear relationship to learning technology. Or be willing to explain why you want to link. I use LinkedIn for business connections, so I’m linked to lots of people I don’t even know, but they’re in our field.

I ask those not in learntech why they want to link. Some do respond, and often have a real reason (shifting to this field, their title masks a real role), and I’m glad I asked.  Other times it’s the ‘Nigerian Prince’ or equivalent. And those will get reported. Recently, it’s new folk who claim they just want to connect to someone with experience. Er, no.  Read this blog, instead. I also have a special message to those in learntech with biz dev/sales/etc roles; I’ll link, but if they pitch me, they’ll get summarily unlinked (and I do).

And I likely won’t link to you on Facebook.  That’s personal. Friends and family. Try LinkedIn instead.

I get lots of emails, particularly from elearning or tech development firms, offering to have a conversation about their services.  I’m sorry, but don’t you realize, with all the time I’ve been in the field, that I have ‘goto’ partners? And I don’t do biz dev: developing contracts and outsourcing production. As Donald H Taylor so aptly puts it, you haven’t established a sufficient relationship to justify offering me anything.

Then, I get email with announcements of new moves and the like.  Apparently, with an expectation that I’ll blog it.  WTH?  Somehow, people think this blog is for PR.  No, as it says quite clearly at the top of the page, this is for my learnings about learning.  I let them know that I pay attention to what comes through my social media channels, not what comes unsolicited.  I also ask what list they got my name from, so I can squelch it. And sometimes they have!

I used to get a lot of offers to either receive or write blog posts. (This had died down, but has resurrected recently.)  For marketing links, obviously. I don’t want your posts; see the above: my learnings!  And I won’t write for you for free. Hey, that’s a service.  See below.

And I get calls from folks offering me a place at their event.  They’re pretty easy to detect: they ask whether I’d like to have access to a specific audience…  I have learned to quickly ask if it’s pay to play.  It always is, and I have to explain that that’s not how I market myself.  Maybe I’m wrong, but I see that working for big firms with trained sales folks, not me. I already have my marketing channels. And I speak and write as a service!

I similarly get a lot of emails that let me know about a new product and invite me to view it and give my opinion.  NO!  First, I could spend my whole day with these. Second, and more importantly, my opinion is valuable!  It’s the basis of 35+ years of work at the cutting edge of learning and technology. And you want it for free?  As if.  Let’s talk some real evaluation, as an engagement.  I’ve done that, and can for you.

As I’ve explained many times, my principles are simple: I talk ideas for free; I help someone personally for drinks/dinner; if someone’s making a quid, I get a cut.  And everyone seems fine with that, once I explain it. I occasionally get taken advantage of, but I try to make it only once for each way (fool me…).   But the number of people who seem to think that I should speak/write/consult for free continues to boggle my mind.  Exposure?  I think you’re overvaluing your platform.

Look, I think there’s sufficient evidence that I’m very good at what I do. If you want to refine your learning design processes, take your L&D strategy into the 21st century, and generally align what you do with how we think, work, and learn, let’s talk.  Let’s see if there’s a viable benefit to you that’s a fair return for me. Lots of folks have found that to be the case.  I’ll even offer the first conversation free, but let’s make sure there’s a clear two-way relationship on the table and explore it.  Fair enough?


2 August 2017

Ethics and AI

Clark @ 8:03 AM

I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI).  Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions.  Quite the heady experience!

The questions were quite varied. Our group looked at Values and Responsibilities. I asked whether that was for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI.  A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences.  Those strike me as concomitant issues!

One of the unique areas was ‘agency’: the ability of an AI to act.  This led to a discussion of the need for oversight of AI decisions. However, I suggested that if the AI was mostly right, human overseers would fatigue. So we pondered: could an AI monitor another AI?  I also noted that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is layered pattern-matchers, so maybe consciousness is just the topmost layer.

One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make stories that don’t always correlate with the evidence of what we do). And with machine learning, we may be making stories about what the system is using to analyze behaviors and make decisions, but it may not correlate.

Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss some factors that would be important to incorporate in making answers.  Even if we have the right inputs, but don’t have a good training set of good and bad outcomes, we get biased decisions. It’s been said that what people are good at is crossing the silos, whereas the machines tend to be good in narrow domains. This is another argument for oversight.

The notion of agency also brought up the issue of decisions.  Vint inquired why we were so lazy in making decisions. He argued that we’re making systems we no longer understand!  I didn’t get the chance to answer that decision-making is cognitively taxing.  As a consequence, we often work to avoid it.  Moreover, some of us are interested in X, so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision.  Also, our lives get more complex; when I grew up, you just had phone and TV, now you need to worry about internet, and cable, and mobile carriers, and smart homes, and…  So it’s not hard to see why we want to abrogate responsibility when we can!  But when can we, and when do we need to be careful?

Of course, one of the issues is about AI taking jobs.  Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones.  Which brought up an increasing need for learning to learn as the key ability for people. Which I support, of course.

The overall problem is that there isn’t central agreement on what ethics a system should embody, even if we could implement it. We currently have different cultures with different values. Could we find agreement when some might have different views of what, say, acceptable surveillance would be? Is there some core set of values required for a society to ‘get along’?  However, that might vary by society.

At the end, there were two takeaways.  For one, the question is whether AI can help us help ourselves!  And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.

