Thomas Reeves opened the third day of the AECT conference with an engaging keynote that used the value of conation to drive the argument for Authentic Learning. Conation, an under-considered component of cognition, is essentially your intent to learn. Authentic learning, in his framing, is very much collaborative problem-solving. He used the challenges from robots/AI to motivate the argument.
Derek Cabrera AECT Keynote Mindmap
Stay Curious
One of my ongoing recommendations to people grew out of a throwaway line, playing off an advertisement. Someone asked about a strategy for continuing to learn (if memory serves), and I quipped “stay curious, my friends”. However, as I ponder it, I think more and more that such an approach is key.
I was thinking of this trend the other day as “intellectual restlessness”. What I’m talking about is being intrigued by things you don’t understand, whether they’ve persisted or only recently crossed your awareness, and pursuing them. It’s not just saying “how interesting”, but recognizing connections, and pondering how they could change what you do. Even to the point of actually changing!
It also includes pointing out interesting things to other people who would benefit. This doesn’t always have to happen, but in the spirit of cooperation (in the Jarche sense), we could and should contribute and curate when we can. And, ideally, leave trails of your explorations that others can benefit from. Writings, diagrams, videos, what have you: they help others as well as yourself.
I was reminiscing that more than 30 years ago, on top of my job designing educational computer games, I was already curious. I still have copies of the magazines containing reviews I did (one hardware, one software), as well as a journal article based upon undergraduate research I was fortunate to participate in.
And that persistence in curiosity has led to a trail of artefacts. You may have come across the books, book chapters, articles, presentations, etc. And, of course, this blog for the past decade and more. (May it continue!) However, I’m not here to tout my wares, but instead to point to the benefit of being curious.
As things change faster, a continuing interest is what provides an ongoing ability to adapt. All the news about the ongoing changes in jobs and work isn’t likely to lessen. Staying curious benefits you, your colleagues and friends, and I reckon society in general. You want to look at many sources of information, track tangential fields, and be open to new ideas.
This isn’t just your choice, of course; ideally your organization is supportive. These lateral inputs are a component of innovation, as is time to allow for serendipity and incubation. Orgs that want to be agile will need these capabilities as well. I suppose organizations need to stay curious too!
Mundanities
This post is late, as my life has been a little less reflective, and a little more filled with some mundane issues. There are some changes here around the Quinnstitute, and they take bandwidth. So here’s a small update on these mundanities, with some lessons:
First, I moved my office from the side of the house back to the front. My son had occupied it, but he’s settled into an apartment for college, and I prefer the view out to the street (to keep an eye on the neighborhood). Of course, this entailed some changes:
My ergonomic chair stopped working, and it took several days to a) find someone who’d repair it, b) get it there, c) wait for it to get fixed, and d) get it back. It was worth it (a lot less than replacing it), and ergonomics is important.
Speaking of which, I could also now get a standup desk, or in my case one of those convertible desks that lets you raise and lower your workspace. I’ve been wanting one since the research came out on the problems with sitting. We’d previously constructed a custom desktop (with legs from Ikea!) for the odd-shaped room, so it made sense to just put the new unit on top. So far, so good. Strongly recommended.
Also bought a used bookshelf (rather than move the one from the old office). Real wood, real heavy. Used those ‘forearm forklift’ straps to get it in. They work! And, this being earthquake country, had to strap it to the wall. Still to come: filling with books.
At the same time, fed up with all the companies that provide internet and cable television, we decided to change. (We changed mobile providers back in January.) As I noted previously, companies use policies to their advantage. One of the approaches is that they sell you a two-year package, but then there’s no notification when the time’s up and the rate jumps. And you can’t find a provider with just a plain low rate (I don’t even mind if it’s higher than the bonus deal). Everyone uses this practice. Sigh.
As I said, I couldn’t find anyone better, but decided to change anyway. That involved conversations, and research, and installation time, and turning off the old systems. At least we’re getting a) a lower rate, b) a nicer DVR, and c) faster internet. For the time being. While the new provider promised to ping me before the plan runs out, the old provider says they can’t. See what I mean? Regardless, I’ve set a reminder before it expires to sign up anew. Or change again. That’s the lesson on this one.
And of course there are some conversations about some upcoming presentations. I was away last week presenting, and have one coming up next month (ATD China Summit, if you’re near Shanghai say hello) and several in November at AECT in Jacksonville. You’ve seen some of the AI reflections, more likely to come on the new topics.
And there’s been some background work. Reading a couple of books, and working on two projects. Stay tuned for a couple of new things early next year.
The lesson, of course, is that finding time to reflect while you’re executing on mundanities is more challenging, but it’s still a valuable investment. I fight to make time; I hope you do too!
Transparency
I believe that transparency is a good thing. It builds trust, as it makes it hard to hide things. And trust is important. So, in the spirit of transparency, it occurred to me to share a little bit about me and this blog. Here I lay out who I am, why I write it, and what I write about.
You can find out more via the ‘about Clark Quinn’ link in the right column, but in brief, I saw the connection between computing and learning as an undergraduate, and it’s been my career ever since. It’s not just my vocation, but my avocation: I enjoy exploring cognition and technology. And while I’ve done the science and track it, what I revel in (and have demonstrable capability for) is applying cognitive and learning science to create new approaches and fine-tune existing ones. Learning engineering, if you will.
And, for a variety of reasons, I do this as a consultant. I make my living providing strategic guidance for clients. I speak at events, and write books, but my main income is from consulting. Which means you should hire me. I assist organizations to improve their processes and products, both tactically and strategically. My clients have been happy, and find it’s good value. What you get are unique ideas that are practical and yet effective. Ideas you aren’t likely to have come up with, but are valuable. I really do Quinnovate! Check out the Quinnovation site for more. Of course, I do have to live in the real world, and so I need to find ways to do this that are mutually beneficial.
Yet generating business isn’t why I write this blog. I started writing this blog as an experiment and originally tried to write 5 days a week (but was happy if that ended up being 2-3 times a week). My commitment now is 2 per week (though on rare occasions it ends up being 1 or 3). And I haven’t monetized it: there’s no advertising, and while I occasionally talk about where I’m speaking or the like, I haven’t used this as a way to sell things. Hopefully that can continue.
So, the reason I write is to think ‘out loud’. It’s largely for me: it makes me think. I’m just always curious! I’ve previously recounted the story about how I was on a panel answering questions from the audience, and one of my fellow panelists commented that I had an answer for everything. The reason is that, in the ongoing attempt to populate the blog, I’ve looked at lots of things. As my client engagements have been in many different areas, I also have wide-ranging experience to draw upon. And I just naturally reflect, but getting concrete (diagramming and/or writing) provides additional benefits.
Thus, the process of continually writing (for over 10 years now) means I’m looking at lots of things, reflecting on them, and sharing my thoughts. I also make a point to look at related fields, and look for connections. I also look at what’s happening with technology. In general, I look with a critical eye, as I was trained as a scientist. I think that’s valuable as well, because there still is a lot of nonsense trotted out, and there’s always some new buzzword that’s being loosely tossed about. Blogging’s given me cause to continue to tune my thinking, and at least some folks have commented that they’ve found it useful.
Mostly I write about things related to technology, learning, and the individual and organizational implications. That includes diversions into innovation, design, wisdom, performance support, and the like, because they have implications for practice. In many ways I see approaches that aren’t well aligned with how we think, work, and learn, and that strikes me as both a shame and an opportunity to improve. And that’s what I enjoy: finding ways to improve what we do.
So that’s it: I blog to facilitate my understanding, because cognitive science and technology is my passion. It isn’t a direct business move. I do need to make a living, and prefer to do it in the area of my passion, and fortunately have been successful so far. (Which isn’t to say you shouldn’t find a reason to use me; there are never enough opportunities to assist in improvement, and I’m not a sales person ;). And yes, this life is a learning experience all in itself! I hope this is clear, but in the interests of transparency I welcome your inquiries and comments. Stay curious, my friends.
Mark Kelly C3 Keynote Mindmap
Astronaut Mark Kelly gave a warm, funny, and inspiring talk. He used stories from his youth, learning to fly, becoming an astronaut, and being husband to Gabby Giffords to emphasize key success factors.
(I confess that owing to his style of elocution, punctuating stories with very pithy comments, I may have missed a point or two at the beginning until I picked up on it.)
Coping with Cognition
Our brains are amazing things. They make sense of the world, and have developed language to help us both make better sense together and communicate our learnings. And yet this same amazing architecture has some vulnerabilities too. I just fell prey to one, and it’s making me reflect on what we can do, and what we still can’t. Our cognition is powerful, but also limited.
So, yesterday I had a great idea for a post for today. Now, I multi-task, and I have several things going at once. I have strategies to get these things done despite the fact that multi-tasking doesn’t work. So for one, I have a specific goal for several of the projects each day. I write tasks for projects into a project management tool. I even keep windows open to remind me of things to do. And I write non-project oriented tasks into a separate ToDo list. But…
I didn’t document the blog post idea before I did something else, and got distracted by one of my open projects. I don’t know which, but I lost the post. Many times, I can regenerate it, but this time I couldn’t.
See, our brain has limitations, and one of them is a limited working memory. We have evolved powerful tools to cover those gaps, including those mentioned above. But we can’t capture everything. Will we ever be able to? Unless I consciously act at the time to do something, whether asking Siri to note it or making a note, those ephemeral thoughts can escape. And I’m not sure that’s a bad thing.
The flaws in our thinking actually have advantages. We can let go of ideas to deal with new ones. And we can miss things because we’re focusing on something else. That’s the power of our architecture. And if we focus on that power, scaffold as much as we can, and let go of what we can’t, we really shouldn’t ask for more.
Our ability to scaffold continues to get better. AI, better interfaces, more processing power, better device interoperation, and smaller and more capable sensors are all ongoing. We’re learning more about putting that to use via innovation. And yet we’ll still have gaps. I think we should be OK with that. Serendipity and experimentation mean we’ll have unintended consequences; generally those may be bad, but every once in a while they may be better. And we can’t find that without some ‘wildness’ (which is also an argument for nature conservation). So I’m trying not to get too upset. I’m cutting our cognition some slack. Let’s not lose the ability to be human.
Extending Engagement
My post on why ‘engagement’ should be added to effective and efficient led to some discussion on LinkedIn. In particular, some questions were asked that I thought I should reflect on. So here are my responses to the issue of how to ‘monetize’ engagement, and how it relates to the effectiveness of learning.
So the first issue was how to justify the extra investment engagement would entail. The question assumed it would take extra investment, and I believe it will. Here’s why. To make a learning experience engaging, you need some additional things: knowing why this is of interest and relevance to practitioners, and putting that into the introduction, examples, and practice. With practice, that’s going to come with only a marginal overhead. More importantly, that work is also part of making the experience more effective. There is some additional information needed, and more careful design, and that certainly is more than most of what’s being done now. (Even if it should be.)
So why would you put in this extra effort? What are the benefits? As the article suggested, the payoffs are several:
- First, learners know more intrinsically why they should pay attention. This means they’ll pay more attention, and the learning will be more effective. And that’s valuable, because it should increase the outcomes of the learning.
- Second, the practice is distributed across more intriguing contexts. This means that learners will bring higher motivation to the practice. When they’re performing, they’re motivated because it matters. If we have more motivation in the learning practice, it’s closer to the performance context, so we’re making the transfer gap smaller. Again, this will make the learning more effective.
- Third, if you unpack the meaningfulness of the examples, you’ll make the underlying thinking easier to assimilate. The examples are comprehended better, and that leads to more effectiveness.
If learning’s a probabilistic game (and it is), and you increase the likelihood of it sticking, you’re increasing the return on your investment. If the margin to do it right is less than the value of the improvement in the learning, that’s a business case. And I’ll suggest that these steps are part of making learning effective, period. So it’s really going from a low likelihood of transfer – 20-30% say – to effective learning – maybe 70-80%. Yes, I’m making these numbers up, but…
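To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python. Every figure in it (per-learner value, design costs, transfer rates) is an illustrative assumption, just like the made-up percentages above:

```python
# Back-of-the-envelope business case: all numbers are illustrative assumptions,
# echoing the made-up transfer rates above.

def learning_roi(value_if_applied, transfer_rate, cost):
    """Expected per-learner return: value delivered when the learning transfers, minus cost."""
    return value_if_applied * transfer_rate - cost

value_per_learner = 1000  # assumed value of a learner actually applying the new skill

# Information dump & knowledge test vs. elaborated examples & contextualized practice
baseline = learning_roi(value_per_learner, transfer_rate=0.25, cost=100)
engaging = learning_roi(value_per_learner, transfer_rate=0.75, cost=150)

print(f"Baseline design: {baseline:+.0f} per learner")  # +150
print(f"Engaging design: {engaging:+.0f} per learner")  # +600
# The extra design cost (50 here) is far less than the value of the improved
# transfer (500 here), so the business case holds under these assumptions.
```

The specific numbers don’t matter; the point is simply that if the marginal design cost is smaller than the value of the extra transfer, the deeper design pays for itself.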
This is really all part of going from information dump & knowledge test to elaborated examples and contextualized practice. So that’s really not about engagement, it’s about effectiveness. And a lot of what’s done under the banner of ‘rapid elearning’ is ineffective. It may be engaging, but it isn’t leading to new skills.
Which is the other issue: a claim that engagement doesn’t equal better learning. And in general I agree (see: activity doesn’t mean effectiveness in a social media tool). It depends on what you mean by engagement; I don’t mean trivialized scores equalling more activity. I mean fundamental cognitive engagement: ‘hard fun’, not just fun. Intrinsic relevance. Not marketing flair, but real value added.
Hopefully this helps! I really want to convince you that you want deep learning design if you care about the outcomes. (And if you don’t, why are you bothering? ;). It goes to effectiveness, and requires addressing engagement. I’ll also suggest that while it does affect efficiency, it does so in marginal ways compared to substantial increases in impact. And that strikes me as the type of step one should be taking. Agreed?
My policies
Like most of you, I get a lot of requests for a lot of things. Too many, really. So I’ve had to put in policies to be able to cope. I like to provide a response (I feel it’s important to communicate the underlying rationale), so I have stock blurbs that I cut and paste (with an occasional edit for a specific context). I don’t want to repeat them here, but instead I want to be clear about why certain types of actions are going to get certain types of response. Consider this a public service announcement.
So, I get a lot of requests to link on LinkedIn, and I’m happy to, with a caveat. First, you should have some clear relationship to learning technology. Or be willing to explain why you want to link. I use LinkedIn for business connections, so I’m linked to lots of people I don’t even know, but they’re in our field.
I ask those not in learntech why they want to link. Some do respond, and often have a real reason (they’re shifting into this field, or their title masks a real role), and I’m glad I asked. Other times it’s the ‘Nigerian Prince’ or equivalent, and those get reported. Recently, it’s new folks who claim they just want to connect with someone with experience. Er, no. Read this blog instead. I also have a special message for those in learntech with biz dev/sales/etc. roles: I’ll link, but if they pitch me, they’ll get summarily unlinked (and I do).
And I likely won’t link to you on Facebook. That’s personal. Friends and family. Try LinkedIn instead.
I get lots of emails, particularly from elearning or tech development firms, offering to have a conversation about their services. I’m sorry, but don’t you realize that, with all the time I’ve been in the field, I have ‘goto’ partners? And I don’t do biz dev: I don’t land contracts and outsource the production. As Donald H Taylor so aptly puts it, you haven’t established a sufficient relationship to justify offering me anything.
Then I get emails announcing new moves and the like, apparently with the expectation that I’ll blog about them. WTH? Somehow, people think this blog is for PR. No: as it says quite clearly at the top of the page, this is for my learnings about learning. I let them know that I pay attention to what comes through my social media channels, not what comes unsolicited. I also ask what list they got my name from, so I can squelch it. And sometimes they have!
I used to get a lot of offers to either host or write blog posts. (This had died down, but has resurfaced recently.) For marketing links, obviously. I don’t want your posts; see above: my learnings! And I won’t write for you for free. Hey, that’s a service. See below.
And I get calls from folks offering me a place at their event. They’re pretty easy to detect: they ask whether I’d like access to a specific audience… I’ve learned to quickly ask if it’s pay-to-play. It always is, and I have to explain that that’s not how I market myself. Maybe I’m wrong, but I see that working for big firms with trained sales folks, not me. I already have my marketing channels. And I speak and write as a service!
I similarly get a lot of emails that let me know about a new product and invite me to view it and give my opinion. NO! First, I could spend my whole day on these. Second, and more importantly, my opinion is valuable! It’s the basis of 35+ years of work at the cutting edge of learning and technology. And you want it for free? As if. Let’s talk about a real evaluation, as an engagement. I’ve done that, and can for you.
As I’ve explained many times, my principles are simple: I talk ideas for free; I help someone personally for drinks/dinner; if someone’s making a quid, I get a cut. And everyone seems fine with that, once I explain it. I occasionally get taken advantage of, but I try to make it only once for each way (fool me…). But the number of people who seem to think that I should speak/write/consult for free continues to boggle my mind. Exposure? I think you’re overvaluing your platform.
Look, I think there’s sufficient evidence that I’m very good at what I do. If you want to refine your learning design processes, take your L&D strategy into the 21st century, and generally align what you do with how we think, work, and learn, let’s talk. Let’s see if there’s a viable benefit to you that’s a fair return for me. Lots of folks have found that to be the case. I’ll even offer the first conversation free, but let’s make sure there’s a clear two-way relationship on the table and explore it. Fair enough?
Ethics and AI
I had the opportunity to attend a special event pondering the ethical issues that surround Artificial Intelligence (AI). Hosted by the Institute for the Future, we gathered in groups beforehand to generate questions that were used in a subsequent session. Vint Cerf, co-developer of the TCP/IP protocol that enabled the internet, currently at Google, responded to the questions. Quite the heady experience!
The questions were quite varied. Our group looked at Values and Responsibilities. I asked whether that was for the developers or the AI itself. Our conclusion was that it had to be the developers first. We also considered what else has been done in technology ethics (e.g. diseases, nuclear weapons), and what is unique to AI. A respondent mentioned an EU initiative to register all internet AIs; I didn’t have the chance to ask about policing and consequences. Those strike me as concomitant issues!
One of the unique areas was ‘agency’, the ability for AI to act. This led to a discussion of the need for oversight of AI decisions. However, I suggested that if the AI was mostly right, human overseers would fatigue. So we pondered: could an AI monitor another AI? I also noted that there’s evidence that consciousness is emergent, and so we’d need to keep the AIs from communicating. It was pointed out that the genie is already out of the bottle, with chatbots online. Vint suggested that our brain is a set of layered pattern-matchers, so maybe consciousness is just the topmost layer.
One recourse is transparency, but it needs to be rigorous. Blockchain’s distributed transparency could be a model. Of course, one of the problems is that we can’t even explain our own cognition in all instances (we make stories that don’t always correlate with the evidence of what we do). And with machine learning, we may be making stories about what the system is using to analyze behaviors and make decisions, but it may not correlate.
Similarly, machine learning is very dependent on the training set. If we don’t pick the right inputs, we might miss factors that are important to producing good answers. Even if we have the right inputs but don’t have a good training set of good and bad outcomes, we get biased decisions. It’s been said that people are good at crossing silos, whereas machines tend to be good in narrow domains. This is another argument for oversight.
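As a purely illustrative sketch (hypothetical data, assuming numpy and scikit-learn are available), here’s how a training set that under-represents one group bakes bias into the resulting decisions: the model faithfully learns the dominant group’s pattern and misclassifies the other group.

```python
# Hypothetical illustration of training-set bias: the feature-to-outcome
# relationship differs between two groups, but group A dominates the sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """One feature predicts the label, but the true decision boundary differs by group."""
    x = rng.normal(0, 1, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)
    return x, y

# Group A dominates the training set; group B is barely represented.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=1.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, balanced samples from each group.
xa_test, ya_test = make_group(1000, shift=0.0)
xb_test, yb_test = make_group(1000, shift=1.0)
print("Accuracy on group A:", model.score(xa_test, ya_test))  # high
print("Accuracy on group B:", model.score(xb_test, yb_test))  # noticeably lower
```

The model isn’t malicious; it simply reflects the data it was given, which is exactly why the choice of inputs and training examples, and some human oversight, matter.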
The notion of agency also brought up the issue of decisions. Vint inquired why we were so lazy in making decisions. He argued that we’re making systems we no longer understand! I didn’t get the chance to answer that decision-making is cognitively taxing. As a consequence, we often work to avoid it. Moreover, some of us are interested in X, and so are willing to invest the effort to learn it, while others are interested in Y. So it may not be reasonable to expect everyone to invest in every decision. Also, our lives get more complex: when I grew up, you just had phone and TV; now you need to worry about internet, and cable, and mobile carriers, and smart homes, and… So it’s not hard to see why we want to abdicate responsibility when we can! But when can we, and when do we need to be careful?
Of course, one of the issues is AI taking jobs. Cerf stated that innovation takes jobs, and generates jobs as well. However, the problem is that those who lose the jobs aren’t necessarily capable of taking the new ones. Which brought up an increasing need for learning to learn as the key ability for people. Which I support, of course.
The overall problem is that there isn’t central agreement on what ethics a system should embody, even if we could build it in. We currently have different cultures with different values. Could we find agreement when some might have a different view of what, say, acceptable surveillance would be? Is there some core set of values required for a society to ‘get along’? Even that might vary by society.
At the end, there were two takeaways. For one, the question is whether AI can help us help ourselves! And the recommendation is that we should continue to reflect and share our thoughts. This is my contribution.