Learnlets


Clark Quinn’s Learnings about Learning

2018 Trajectories

3 January 2018 by Clark

Given my reflections on the past year, it’s worth thinking about the implications. What trajectories can we expect if the trends are extended? These are not predictions (as has been said, “never predict anything, particularly the future”). Instead, these are musings, and perhaps wishes for what could (even should) occur.

I mentioned an interest in AR and VR.  I think these are definitely on the upswing. VR may be on a rebound from some early hype (certainly ‘virtual worlds’), but AR is still in the offing.  And the tools are becoming more usable and affordable, which typically presages uptake.

I think the excitement about AI will continue, but I reckon we’re already seeing a bit of a backlash. I think that’s fair enough. And I’m seeing more talk about Intelligence Augmentation, and I think that’s a perspective we continue to need. Informed, of course, by a true understanding of how we think, work, and learn. We need to design to work with us. Effectively.

Fortunately, I think there are signs we might see more rationality in L&D overall. Certainly we’re seeing lots of people talking about the need for improvement. I see more interest in evaluation, which is also a good step. In fact, I believe it’s a good first step!

I hope it goes further, of course. The cognitive perspective suggests everything from training & performance support, through facilitating communication and collaboration, to culture. There are many facets that can be fine-tuned to optimize outcomes.

Similarly, I hope to see a continuing improvement in learning engineering. That’s part of the reason for the Manifesto and the Quinnov 8. How it emerges, however, is less important than that it does. Our learners, and our organizations, deserve nothing less.

Thus, the integration of cognitive science into the design of performance and innovation solutions will continue to be my theme.  When you’re ready to take steps in this direction, I’m happy to help. Let me know; that’s what I do!

Video Lessons

15 December 2017 by Clark

So, I’ve been creating a ‘deeper elearning’ course for one of the video course providers. And I’m not mentioning where it is (yet), since it’s still under development.  But to do this, I had to do some serious learning about creating video.  And there were some realizations in this, of course.

One of the decisions to be made was how to include graphics. My mentor/colleague/friend showed me (by video chat) his elegant setup. He has green screens, and lights, and a full studio in a separate room as well. Of course, he’s been doing video for decades. I’ve hardly done much besides taking a multimedia course at least 20 years ago and narrating the occasional Keynote deck.

In the meantime I asked around, and colleagues were pretty unanimous on ScreenFlow being  the tool to use.  So I got a copy. And, indeed, I was able to film myself.  Moreover, I quickly found out I could include diagrams and text right on the screen! That eliminated the need for a green screen.

I had a couple of lights, because without them my screen reflected on my glasses. That’s not entirely fixable, since I didn’t get the anti-glare coating when I had them made. Doh! Next time, for sure. Positioning the lights off to each side reduced (though didn’t eliminate) the glare.

We were moving my office back to the front of the house (long story), so we moved a bookcase behind me, with my library.  It looks good, but…you don’t see much of it anyway.  I filmed standing up (on my new stand/sit desk converter), and I block most of the background anyway (except for the Albert Einstein poster that sits on the wall).

Having read up, I knew to have a written script, which, without a prompter, I just positioned at the top of the screen under the camera. Of course I changed it a bit, and ad-libbed a bit, but mostly stuck to what I’d written. It’s not quite as spontaneous (and goofy) as I am in person, but it ensures consistent quality. And I filled in diagrams a few times, and added some text a few times, to help keep the pace.

Frankly, it’s not great, but I had a deadline.  It’s too much of me talking, without animation. But this is done by me, alone, under a tight deadline. And that’s my error, too, since I have video anxiety almost as bad as my phone anxiety, and dragged my heels until things were too late.  Dang emotions getting in the way again! (Even when you know this.)

I also created some quizzes, pretty much in mini-scenario fashion. That is, there’s a fair bit of dialog setting up a situation that you’re asked to respond to. Because multiple choice was the only option, I was somewhat constrained. I was subsequently prodded for some assignments, and found I could do what I’d talked about: I used the assignment tool to create questions asking learners to go out and do things, and then provided some guidance for them to self-evaluate.

One thing I learned is that I don’t have a good mental model of how the software works. I ‘get’ the tracks, but there’s another aspect I don’t understand. It turns out that though I’d filmed myself at 720p, and exported at 720p, the result still had an unnecessary border. Fortunately, in stumbling around I found a ‘crop’ setting that forced it to 1280 x 720 (720p), but I don’t understand why that was necessary!
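For anyone hitting the same border problem, here’s a minimal workaround sketch outside ScreenFlow, assuming ffmpeg is installed and using invented file names (my guess at a general fix, not the tool’s own workflow): scale the export to cover 1280 x 720, then crop the overflow.

```python
# Hypothetical post-export fix: force a recording to exactly 1280x720 (720p).
# Assumes ffmpeg is on the PATH; the file names are invented for illustration.
import subprocess

def force_720p(src: str, dst: str) -> None:
    """Scale the video to cover 1280x720, then center-crop any border/overflow."""
    vf = "scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720"
    subprocess.run(
        ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst],
        check=True,
    )

force_720p("lesson-raw.mov", "lesson-720p.mp4")
```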

I still want to add some examples (as documents) before I feel it’s fully ready to go. And I now sympathize much more with those who struggle to do good learning design under real-world constraints.  It’s also certainly been an example of my accepting assignments that are within my reach, but not within my grasp; my learning style ;).   More later, but thought I’d share my struggles and learning. I welcome your feedback.

Usability and Networks

5 December 2017 by Clark

As I mentioned in an earlier post, I have been using Safari and Google to traverse the networks. And in a comment, I mentioned that the recent launch of the new Firefox browser was prompting me to switch.  And that’s now been put through a test, and I thought it instructive to share my learnings.

The rationale for the switch is that I don’t completely trust Google and Apple with my data. Or anyone, really, for that matter.  On principle. I had used Safari over Chrome because I trust Apple a wee bit more, and Firefox was a bit slow.  And Safari just released a version that stops videos from auto-starting. And similarly, Google’s search has been the best, and with a browser extension and some adjustments, I was getting ads blocked, tracking stopped, and more.  Still, I wasn’t happy.  And I hadn’t figured out how to do an image search with DuckDuckGo (something I do a fair bit) the last time I tried, so that hadn’t been a search option.

All this changed with the release of Firefox’s new Quantum browser. After a trial spin, the speed was good, as was the whole experience. Now, I want an integrated experience across my devices, so I downloaded the Firefox versions for my iDevices as well. And, as long as I was changing, I tried DuckDuckGo again, and found it did have image search. So I made it my search engine as well.

And, after about a week of experience, I’m not sticking with Firefox. The desktop version is all I want, but the iDevice versions don’t cut it. I use my toolbar bookmarks a lot. Many times a day. And on the iDevices, they do synch, but…they’re buried behind four extra clicks. And that’s just not acceptable. The user experience kills it for me. Those versions also don’t take advantage of the revised code behind the new desktop version, but it wasn’t the speed that killed the deal. The point I want to make is that you have to look at the total experience, not just one or another aspect in isolation. It’s time for an ecosystem perspective.

On the other hand, I’m still trying DuckDuckGo. It seems to give good hits. And the fact that they’re not tracking me is important. If I can avoid tracking, I will. Sure, my ISP can still track me, and so can Apple, but I’ll keep working on those. Oddly, it seems to return different results on different devices (?!). Still testing.

And, as long as we’re talking the net, I’m going to do something I don’t usually do here; I’m going to take a position on something besides learning. To do so, let me provide some context. I’ve been on the net since before there was a web. Way before. Circa 1978, I was able to send and receive email even though there wasn’t any internet. I was at a uni with ARPANET, however, so I had a taste. Roll forward a decade and more, and I was playing with Gopher and WAIS and USENET before Tim Berners-Lee had created HTTP. That is, there were other protocols that preceded it. (In fact, I was blasé about the web at first, because of that; doh!) My point is that I’ve been leveraging the benefits of networks for a bloody long time.

And now we depend on it. The internet is the basis for elearning! And, of course, so much more.  It has vastly accelerated our ability to interact. And while that’s created problems, it’s also enabled incredible benefits.  Innovation flourishes when there are open standards.  When people can build upon a solid and open foundation, creativity means new opportunity.  Network effects are true for people and for data.

Which is why I’m firmly in the camp for net neutrality. This is important! (It must be, because I used bold, which I almost never do ;). The alternative, where providers will be able to throttle or even bar certain types of data, will stifle innovation. It’s like plumbing, telephone, and electricity: they need to be available as long as you can pay your bill (and there need to be options to support those with limited incomes). Please, please, please let your elected representatives and the FCC know that this is important to you.


My Professional Learner’s Toolkit

21 November 2017 by Clark

My colleague, Harold Jarche, recently posted about his professional learning toolkit, reflecting our colleague Jane Hart’s post about a Modern Learner’s Toolkit. It’s a different cut through the top 10 tools.  So I thought I’d share mine, and my reflections.

Favorite browser and search engine: I use Safari and Google, by default. Of course, I keep Chrome and Firefox around for when something doesn’t work (e.g. Qualtrics). I would prefer another search engine, probably DuckDuckGo, but I’m not facile with it (for instance, finding images).

A set of trusted web resources: That’d be Wikipedia, pretty much. And online magazines, such as eLearnMag and Learning Solutions, and ones for my personal interests. I often use Pixabay to find images.

A number of news and curation tools: I use Google News and the ABC (Oz, not US) in my browser, and the BBC and News apps on my iDevices. I also use Feedblitz to bring blogposts into my email.  I keep my own bookmarks using my browser.

Favorite web course platforms: I haven’t really taken online courses. I’ve used Zoom to share.

A range of social networks: I use LinkedIn professionally, as well as Slack. And Twitter, of course.  I stay in touch with my ITA colleagues via Skype.  Facebook is largely personal.

A personal information system: I use both Notability and Notes to take notes.  Notes more for personal stuff, Notability for work-related. I use Omnigraffle for diagrams and mindmaps.  And OmniOutliner also helps when I want to think hierarchically.

A blogging or website tool: I use WordPress for Learnlets (i.e. here), and I use Rapidweaver for my sites: Quinnovation and my book sites.

A variety of productivity apps and tools: Calendar is crucial, and Pagico keeps me on track for projects. I use Google Maps for navigation. I use SplashID for passwords and other private data. I often read and markup documents on my iPad with GoodReader. CloudClip lets me share a multi-item clipboard across my devices.    Reflection: this overlaps with the personal information system.

A preferred office suite: I don’t have a preferred suite, though I’d like to use the Apple suite. I use Word to write (Pages hasn’t had industrial-strength outlining) and Keynote to create presentations (so, one from each suite). I don’t create spreadsheets often.

A range of  communication and collaboration tools: I use Google Drive to collaborate on representations.  I have used Dropbox to share documents as well. And of course Mail for email.   Reflection: this overlaps with social networks.

1 or more smart devices: I’d be lost without my iPhone and iPad (neither of which is the latest model). I use the phone for ‘in the moment’ things, the iPad for when I have longer time frames.

So, that’s my toolkit, what’s yours?

[Image: Jane's toolkit diagram]

Rules for AI

2 November 2017 by Clark

After my presentation in Shanghai on AI for L&D, there were a number of conversations that ensued, and led to some reflections. I’m boiling them down here to a few rules that seem to make sense going forward.

  1. Don’t worry about AI overlords. At least, not yet ;). Rodney Brooks wrote a really nice article talking about why we might be fearing AI, and why we shouldn’t. In it, he cited Amara’s Law: we tend to overestimate technology in the short term, and underestimate its impact in the long term. I think we’re in the short term of AI, and while it’s easy to extrapolate from smart behavior in a limited domain to similar behavior in another (and sensible for humans), it turns out to be hard to get computers to do so.
  2. Do be concerned about how AI is being used. AI can be used for ill or good, and we should be concerned about the human impact. I realize that a focus on short-term returns might suggest replacing people when possible. And anything rote enough possibly should be replaced, since it’s a sad use of human ability. Still, there are strong reasons to consider the impact on the people being affected, not least humanitarian, but also practical. Which leads to:
  3. Don’t have AI without human oversight (at least in most cases). As stated above in 1, AI doesn’t generalize well. While it can be trained to work within the scope you describe, it will suffer at the boundary conditions, and in any ambiguous or unique situations. It may well make a better judgment in those cases, but it also may not. In most cases, it will be best to have an external review process for all decisions being made, or at least the ones at the periphery. Because:
  4. Your AI is only as good as its data set and/or its algorithms. Much of machine learning essentially runs on historical datasets. And historical datasets can have historical biases in them. For instance, if you were to look at building a career counselor based upon what’s been done in many examples across schools, you might find that women were being steered away from math-intensive careers. Similarly, if you’re using a mismatched algorithm (as happens often in statistics, for example), you could be biasing your results. (A simple audit sketch follows this list.)
  5. Design as if AI means Augmented Intelligence, not Artificial Intelligence (perhaps an extension of 3). There are things humans do well, and things that computers do well. AI is an attempt to address the intersection, but if our goal is (as it should be) to get the best outcome, it’s likely to be a hybrid of the two. Yes, automate what can and should be automated, but first consider what the best total solution would be, and then, if it’s OK to just use the AI, do so. But don’t assume so.
  6. AI on top of a bad system is a bad system. This is, perhaps, a corollary to 4, but it goes further. So, for instance, if you create a really intriguing simulated avatar for practicing soft skills, but you’re still not really providing a good model to guide performance, and good examples, you’re either requiring considerably more practice or risking an inappropriate emergent model. AI is not a panacea, but instead a tool in designing solutions (see 5). If the rest of the system has flaws, so will the resulting solution.
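To make rule 4 concrete, here’s the audit sketch promised above: a minimal, hypothetical check of a historical career-counseling dataset for gender skew. The file and column names are invented for illustration.

```python
# Hypothetical audit: does a historical dataset steer one gender away from
# math-intensive careers? The CSV and its columns are invented for illustration.
from collections import Counter
import csv

with open("counseling_history.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Count (gender, steered-toward-math?) pairs across the historical records.
counts = Counter((r["gender"], r["recommended_math_career"]) for r in rows)

for gender in sorted({r["gender"] for r in rows}):
    yes, no = counts[(gender, "yes")], counts[(gender, "no")]
    print(f"{gender}: {yes / (yes + no):.1%} steered toward math-intensive careers")
```

If the rates differ sharply, a model trained on that history will likely reproduce the steering, which is exactly the bias to catch before deployment.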

This is by no means a full set, nor a completely independent one. But it does reflect some principles that emerged from my interactions around some applications and discussions with people. I welcome your extensions, amendments, or even contrary views!

Addressing Changes

25 October 2017 by Clark

Yesterday, I listed some of the major changes that L&D needs to acknowledge. What we need now is to look at the top steps that need to be taken.  As serious practitioners in a potentially valuable field, we need to adapt to the changing environment as much as we need to assist our charges to do so. So what’s involved?

We need to get a grasp on technology affordances. We don’t just need to know that the latest technology exists, whether AI, AR, or VR. Instead, we have to understand what they mean in the context of our brains. What key capabilities do they bring? Can VR go beyond entertainment to help us learn better? How can AI partner with us? If we can make practical use of AR, what would we do with it?

In conjunction, we need to understand the realities about us. We need to take ownership and have a suitable background in how people really think, work, and learn. Further, we need to recognize that they’re all tied together, not separate things. So, for instance, we learn as we work, we think as we learn, etc.

For example, we need to understand situated and distributed cognition. That is, we need to grasp that we’re not formal logical thinkers, but instead very context dependent, and that our thinking is across our tools. As a consequence, we need to design solutions that recognize our individual situations, and leverage technology as an augment. So we want to design human/computer system solutions to problems, not just human or system solutions.

We also need to understand cultural elements. We work better when we are given meaningful work, freedom to pursue those goals, and get the necessary support to succeed. This is  not micromanagement, but instead, is leadership and coaching. We also need an environment where it’s safe, expected even, to experiment and even to make mistakes.

We also need to understand that we work better (read: produce better results), when we work together in particular ways. Where we understand that we should allow individual thought first, but then pool those ideas. And we need to show our work and the underlying thinking. Moreover, again, it has to be safe to do so!

And, these are all tied together into a systemic approach!  It can’t be piecemeal, because working together and out loud can’t be divorced from the technology used to enable these capabilities. And giving people meaningful work and not letting them work together, or vice-versa, just won’t achieve the necessary critical mass.

Finally, we also need to do this in alignment with the business. And, let’s be clear, in ways that can be measured! We need to understand the critical performance needs of the organization, and demonstrate that we’re impacting them in the ways above.

This can be done, and it will be the hallmark of successful organizations. We’re already seeing a wide variety of converging evidence that these changes lead to success. The question is: are you going to lead your organization forward into the future, or keep your head down and do what you’ve always done?

Acknowledging Changes

24 October 2017 by Clark

There are a serious number of changes that are affecting organizations.  We’re seeing changes in the information flow, in technology, and in what we know about ourselves. Importantly, these are things that L&D needs to acknowledge and respond to.  What are these changes?

It’s old news that things are happening faster. We’re being overwhelmed with information, and that rate is accelerating. On the other hand, our tools to manage the information flow are also advancing.

Which is the second topic. We’re getting more powerful technology. We can create systems that do tasks that used to be limited to humans. They can also partner with us, providing information based upon who we are, what we’re doing, and what else is going on.

And there are increasing demands for accountability (and transparency). Your actions should be justified. What are you doing, why, and what effect is it having? If you can’t answer these questions, you’re going to be looking for a job.

Most importantly, we’ve learned quite a bit about ourselves that is contrary to many pre-existing beliefs. Specifically ones that influence organizational approaches.  Our myths about how we think, work, and learn are holding us back from achieving optimal outcomes.

For one, there’s a persistent belief that our thinking is in our heads.  Yet research shows that our thinking is distributed across our tools. We use external representations to capture at least part of our thinking, and access information that we can’t keep in our heads effectively.  Yet we seem to depend on courses to put it in the head instead of tools to put it in the world.

Our thinking is also distributed across others. “You’re no longer what you know, but who you know” is a new mantra. So is “the room is smarter than the smartest person in the room” (with the caveat: if you manage the process right ;). Informal and social learning is the work.  Yet we still act as if we believe that people should solve problems independently.

And we also act as if how we learn is by information dump.  Add a quiz, so we know they can recognize the right answer if they see it, and they’ve learned!  Er, no. Science tells us that this is perhaps the worst thing we could do to facilitate learning.

In short, our practices are out of date. We’re using patch-it (or ignore-it) solutions to systemic issues.  We address simple things as if they’re not all connected. It’s time to get on top of what’s known, and then act accordingly.  Are you ready to join the 21st century?

Organizational terms

26 September 2017 by Clark

Listening to a talk last week led me to ponder the different terms for what it is I lobby for.  The goal is to make organizations accomplish their goals, and to continue to be able to do so.  In the course of my inquiry, I explored and uncovered several different ‘organizational’ terms.  I thought I should lay them out here for my (and your) thoughts.

For one, it seemed to be about organizational effectiveness. That is, the goal is to make organizations not just efficient, but capable of optimal levels of performance. When you look at the Wikipedia definition, you find that it’s about “achieving the outcomes the organization intends to produce”. They do this through alignment, improving tradeoffs, and facilitating capacity building. The definition also discusses improvements in decision making, learning, group work, and tapping into the structures of self-organizing and adaptive systems, all of which sound right.

Interestingly, most of the discussion seems to focus on not-for-profit organizations. While I agree on their importance, and have done considerable work with such organizations, I guess I’d like to see a broader focus. Also, and this is purely my subjective opinion, the newer thoughts seem grafted on, and the core still seems to be about producing good numbers. Any time someone uses the phrase ‘human capital’, I get leery.

Organizational engineering is a phrase that popped to mind (similar to learning engineering). Here, Wikipedia defines it as an offshoot of org development, with a focus on information processing. And, coming from cognitive psychology, that sounds good, with a caveat.  The reality is, we’re flawed as ideal thinkers. And in the definition it also talks about ‘styles’, which are a problem all on their own. Overall, this appears to be more a proprietary suite of approaches under a label. While it uses nice sounding terms, the reality (again, my inferences here) is that it may be designed for an audience that doesn’t exist.

The final candidate is organizational development. Here the definition touts “implementing effective change”. The field is defined as interdisciplinary, drawing on psych, sociology, and more. In addition to systems thinking and decision-making, there’s an emphasis on organizational learning and on coaching, so it appears more human-focused. The core values also talk about human beings being valued for themselves, not as resources, and looking at the complex picture. Overall this approach resonates with me more, not just philosophically, but pragmatically.

As I look at what’s emerging from the scientific study of people and organizations, as summed up in a variety of books I’ve touted here, there are some very clear lessons. For one, people respond when you treat them as meaningful parts of a worthwhile endeavor. When you value people’s input and trust them to apply their talents to the goals, things get done. Caring enough to develop them in ways that are supportive, not punitive, and toward not just your goals but theirs too, retains their interest and commitment. And when you provide them with an environment to succeed and improve, you get the best organizational outcomes.

There’s more about how to get started. Small steps, such as working with a small group (*cough* L&D? *cough* ;), developing the practices and the infrastructure, and then spreading, have been shown to work better than a top-down initiative. So have experimenting, reviewing the outcomes, and continually tweaking. Ensure that it’s coaching, not ‘managing’ (managers are the primary reason people leave companies). Etc.

All this shouldn’t be a surprise, but it’s not trivial to do; it takes persistence. And it flies in the face of much of management and HR practice. I don’t really care what we label it; I just want a way to talk about these things that makes it easy for people to know what I mean. There are goals to achieve, so my main question is: how do we get there? Anyone want to get started?

AI Reflections

15 September 2017 by Clark

Last night I attended a session on “Our Relationship with AI” sponsored by the Computer History Museum and the Partnership on AI. In a panel format, noted journalist John Markoff moderated Apple‘s Tom Gruber, AAAI President Subbarao Kambhampati, and IBM Distinguished Research Scientist Francesca Rossi. The overarching theme was:  how are technologists, engineers, and organizations designing AI tools that enable people and devices to understand and work with each other?

It was an interesting session, with the conversation ranging from what AI is, to what it could and should be used for, and how to develop it in appropriate ways. Concerns about AI’s capabilities, roles, and potential misuses were addressed. Here I’m presenting just a couple of the thoughts triggered, as I’ve previously riffed on IA (Intelligence Augmentation) and Ethics.

One of the questions that arose was whether AI is engineering or science. The answer, of course, is both. There’s ongoing research on how to get AI to do meaningful things, which is the science part. Here we might see AI that can learn to play video games.  Applying what’s currently known to solve problems is the engineering part, like making chatbots that can answer customer service questions.

On a related note was the question of what AI can do. Put very simply, the proposal was that AI could do what you can make a judgment on in a second: whether what you see is a face, say, or whether a claim is likely to be fraudulent. If you can provide a good (large) training set that says ‘here’s the input, and this is what the output should be’, you can train a system to do it. Or, in a well-defined domain, you can say ‘here are the logical rules for how to proceed’, and build that system.
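To make the ‘here’s the input, here’s the output’ recipe concrete, a minimal sketch of the fraud example, assuming scikit-learn and an invented toy dataset (my illustration, not anything shown at the session):

```python
# Minimal supervised-learning sketch: train on (input, desired output) pairs,
# then automate the 'one-second judgment'. The features and data are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [claim amount ($), days since policy start, prior claims]
X = [[100, 400, 0], [9500, 10, 3], [250, 720, 1],
     [8800, 5, 4], [120, 365, 0], [9900, 2, 5]]
y = [0, 1, 0, 1, 0, 1]  # 0 = looks legitimate, 1 = likely fraudulent

model = LogisticRegression().fit(X, y)
print(model.predict([[9000, 7, 2]]))  # flags the new claim automatically
```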

The ability to do these tasks, another point ran, is what leads to fear: “Wow, they can be better than me at this task; how soon will they be better than me on many tasks?” The important point made is that these systems can’t generalize beyond their data or rules. They can’t say: ‘oh, I played this video driving game, so now I can drive a car’.

Which means that the goal of artificial general intelligence, that is, a system that can learn and reason about the real world, is still an unknown distance away. It would either have to have a full set of knowledge about the world, or it would need both the capacity and the experience that a human learns from (starting as a baby). Neither approach has shown any sign of being close.

A side issue was that of the datasets. It turns out that datasets can have, or induce systems to learn, implicit biases. A case was mentioned where Asian faces triggered ‘blinking’ warnings, owing to typical eye shape. And this was from an Asian company! Similarly, word recognition ended up biasing women towards associations with kitchens and homes, compared to men. This raises a big issue when it comes to making decisions: could loan-offerings, fraud-detection, or other applications of machine learning inherit bias from datasets? And if so, how do we address it?
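A toy illustration of the word-association problem: the vectors below are invented, but embeddings trained on biased text show the same asymmetric similarity pattern.

```python
# Toy word-vector bias check: cosine similarity between invented vectors.
# Real embeddings trained on biased text exhibit the same asymmetry.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

vecs = {
    "woman":   [0.8, 0.3, 0.1],
    "man":     [0.7, -0.2, 0.4],
    "kitchen": [0.9, 0.4, 0.0],
    "office":  [0.6, -0.3, 0.5],
}

for who in ("woman", "man"):
    for where in ("kitchen", "office"):
        print(f"{who} ~ {where}: {cosine(vecs[who], vecs[where]):.2f}")
```

With these numbers, ‘woman’ sits much closer to ‘kitchen’ than to ‘office’, and vice versa for ‘man’; any downstream decision built on such vectors inherits that skew.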

Similarly, one issue was that of trust. When do we trust an AI algorithm? One suggestion was that it would come through experience (repeatedly seeing benevolent decisions or support). Which wouldn’t be that unusual. We might also employ techniques that work with humans: authority of the providers, credentials, testimonials, etc. One of my concerns then was: could that be misleading? We trust one algorithm, and then transfer that trust (inappropriately) to another? That wouldn’t be unknown in human behavior either. Do we need a whole new set of behaviors around NPCs? (Non Player Characters, a reference to game agents that are programmed, not people.)

One analogy that was raised was to the industrial age. We started replacing people with machines. Did that mean a whole bunch of people were suddenly out of work? Or did that mean new jobs emerged to be filled? Or, since machines are now doing human-type tasks, will there be fewer tasks overall? And if so, what do we do about it? It clearly should be a conscious decision.

It’s clear that there are business benefits to AI. The real question, and this isn’t unique to AI but happens with all technologies, is how we decide to incorporate the opportunities into our systems. So, what do you think are the issues?


Ethics and AI

2 August 2017 by Clark

I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI).  Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions.  Quite the heady experience!

The questions were quite varied. Our group looked at Values and Responsibilities. I asked whether that was for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI.  A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences.  Those strike me as concomitant issues!

One of the unique areas was ‘agency’, the ability for AI to act. This led to a discussion of the need for oversight of AI decisions. However, I suggested that humans, if the AI was mostly right, would fatigue. So we pondered: could an AI monitor another AI? I also thought that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is a stack of layered pattern-matchers, so maybe consciousness is just the topmost layer.

One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make stories that don’t always correlate with the evidence of what we do). And with machine learning, we may be making stories about what the system is using to analyze behaviors and make decisions, but it may not correlate.

Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss some factors that would be important to incorporate in making answers.  Even if we have the right inputs, but don’t have a good training set of good and bad outcomes, we get biased decisions. It’s been said that what people are good at is crossing the silos, whereas the machines tend to be good in narrow domains. This is another argument for oversight.

The notion of agency also brought up the issue of decisions. Vint inquired why we were so lazy in making decisions. He argued that we’re making systems we no longer understand! I didn’t get the chance to answer that decision-making is cognitively taxing. As a consequence, we often work to avoid it. Moreover, some of us are interested in X, and so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision. Also, our lives get more complex; when I grew up, you just had a phone and TV, and now you need to worry about internet, and cable, and mobile carriers, and smart homes, and…  So it’s not hard to see why we want to abdicate responsibility when we can! But when can we, and when do we need to be careful?

Of course, one of the issues is about AI taking jobs. Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones. Which brought up an increasing need for learning to learn, as the key ability for people. Which I support, of course.

The overall problem is that there isn’t a central agreement on what ethics a system should embody, even if we could do it. We currently have different cultures with different values. Could we find agreement when some might have different views of what, say, acceptable surveillance would be? Is there some core set of values that is required for a society to ‘get along’? However, that might vary by society.

At the end, there were two takeaways. For one, the question is whether AI can help us help ourselves! And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.
