Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

3 January 2018

2018 Trajectories

Clark @ 8:08 AM

Given my reflections on the past year, it’s worth thinking about the implications.  What trajectories can we expect if the trends are extended?  These are not predictions (as has been said, “never predict anything, particularly the future”).  Instead, these are musings, and perhaps wishes for what could (even should) occur.

I mentioned an interest in AR and VR.  I think these are definitely on the upswing. VR may be on a rebound from some early hype (certainly ‘virtual worlds’), but AR is still in the offing.  And the tools are becoming more usable and affordable, which typically presages uptake.

I think the excitement about AI will continue, but I reckon we’re already seeing a bit of a backlash. I think that’s fair enough. And I’m seeing more talk about Intelligence Augmentation, and I think that’s a perspective we continue to need. Informed, of course, by a true understanding of how we think, work, and learn.  We need to design to work with us.  Effectively.

Fortunately, I think there are signs we might see more rationality in L&D overall. Certainly we’re seeing lots of people talking about the need for improvement. I see more interest in evaluation, which is also a good step. In fact, I believe it’s a good first step!

I hope it goes further, of course. The cognitive perspective suggests everything from training & performance support, through facilitating communication and collaboration, to culture. There are many facets that can be fine-tuned to optimize outcomes.

Similarly, I hope to see a continuing improvement in learning engineering. That’s part of the reason for the Manifesto and the Quinnov 8. How it emerges, however, is less important than that it does. Our learners, and our organizations, deserve nothing less.

Thus, the integration of cognitive science into the design of performance and innovation solutions will continue to be my theme.  When you’re ready to take steps in this direction, I’m happy to help. Let me know; that’s what I do!

2 January 2018

Reflections on 2017

Clark @ 8:07 AM

The end of the calendar year, although arbitrary, becomes a time for reflection. I looked back at my calendar to see what I’d done this past year, and it was an interesting review. Places I’ve been and things I’ve done point to some common themes. Such is the nature of reflections.

One of the things I did was speak at a number of events. My messages have been pretty consistent along two core themes: doing learning better, and going beyond the course. Both were presented at TK17, which started the year, and one or the other was reiterated through other ATD and Guild events.

With one exception. For my final ATD event of the year, I spoke on Artificial Intelligence (AI). It was in China, and they’re going big into AI. It’s been a recurrent interest of mine since I was an undergraduate. I’ve been fortunate to experience some seminal moments in the field, and even dabble.  The interest in AI does not seem to be abating.

Another persistent area of interest has been Augmented Reality (AR) and Virtual Reality (VR). I attended an event focused on Realities, and I continue to believe in the learning potential of these approaches. Contextual learning, whether building fake or leveraging real, is a necessary adjunct to our learning.  One AR post of mine even won an award!

My work continues to span both organizational learning and higher education. Interestingly, I spoke to an academic audience about the realities of workplace learning! I also had a strategic engagement with a higher education institution on improving elearning.

I also worked on a couple of projects. One I mentioned last week, a course on better ID. I’m still proud of the eLearning Manifesto (as you can see in the sidebar ;). And I continue to want to help people use technology better to facilitate learning. I think the Quinnov 8 are a good way.

All in all, I still believe that pursuing better and broader learning and performance is a worthwhile endeavor. Technology is a lovely complement to our thinking, but we have to do it with an understanding of how our brains work. My last project from the year is along these lines, but it’s not yet ready to be announced. Stay tuned!

28 December 2017

The Quinnov 8: An online course

Clark @ 8:03 AM

Ok, so I told you the story of the video course I was creating on what I call the Quinnov 8, and now I’ll point to it.  It’s available through Udemy, and I’ve tried to keep the price low.  With their usual discounts, it should be darn near free ;).  Certainly no more than a few cups of coffee.

It’s about an hour of video of me talking, with a few diagrams and text placeholders.  I’ve included quizzes for each of the content sections. Also, I have assignments to go away and apply the principles to your own work.  Finally, I created a page or several for each section showing some ideas, models, and more.

I do not recommend going through it in one run. I can’t control that, but as I mention in the course, you want to space it out; we know that spacing leads to better outcomes. I recommend a section a week or so, doing the work, and coming back to reactivate before moving on.

The content is organized around what I’m terming the Quinnov 8, the eight elements I think are core to making the step to better elearning design.  While the ideal is to push to a robust iterative and prototyping model, I’m focusing mostly on the small steps that will give you the greatest leverage. The elements are:

  1. Performance consulting: what to do before you decide to course
  2. Objectives: making the right decisions about what to focus on
  3. SMEs: working with them for objectives and more
  4. Practice: making practice meaningful
  5. Models: the conceptual frameworks that guide performance
  6. Examples: the link between concepts and application
  7. Engagement: wrapping the front and back to create experiences
  8. Process: the extra steps to make this work

I’m trying to go deep, that is, to unpack the levels of cognitive depth to explain how the Quinnov 8 elements work. I’ve identified the challenges I’ve faced, and I may well update it over time, but it’s at a stage where I think I can at least give you the chance to explore. I welcome your feedback, and I reckon this is one way you can further your understanding without a significant budget.

15 December 2017

Video Lessons

Clark @ 8:02 AM

So, I’ve been creating a ‘deeper elearning’ course for one of the video course providers. And I’m not mentioning where it is (yet), since it’s still under development.  But to do this, I had to do some serious learning about creating video.  And there were some realizations in this, of course.

One of the decisions to be made was how to include graphics. My mentor/colleague/friend showed me (by video chat) his elegant setup. He has green screens and lights, and a full studio in a separate room as well. Of course, he’s been doing video for decades. I’ve hardly done much besides taking a multimedia course at least 20 years ago, and narrating the occasional Keynote deck.

In the meantime I asked around, and colleagues were pretty unanimous on ScreenFlow being the tool to use.  So I got a copy. And, indeed, I was able to film myself.  Moreover, I quickly found out I could include diagrams and text right on the screen! That eliminated the need for a green screen.

I had a couple of lights, because without them my screen reflected on my glasses. That’s not really fixable, since I didn’t get the anti-glare coating when I had them made. Doh! Next time, for sure. I positioned the lights off to each side, and they reduced (though didn’t eliminate) the glare.

We were moving my office back to the front of the house (long story), so we moved a bookcase behind me, with my library.  It looks good, but…you don’t see much of it anyway.  I filmed standing up (on my new stand/sit desk converter), and I block most of the background anyway (except for the Albert Einstein poster that sits on the wall).

Having read up, I knew to have a written script, which, without a prompter, I just positioned at the top of the screen under the camera. Of course I changed it a bit, and ad-libbed a bit, but mostly stuck to what I’d written. It’s not quite as spontaneous (and goofy) as I am in person, but it ensures consistent quality. And I filled in diagrams a few times, and added some text a few times, to help keep the pace.

Frankly, it’s not great, but I had a deadline.  It’s too much of me talking, without animation. But this is done by me, alone, under a tight deadline. And that’s my error, too, since I have video anxiety almost as bad as my phone anxiety, and dragged my heels until things were too late.  Dang emotions getting in the way again! (Even when you know this.)

I also created some quizzes, mostly in mini-scenario fashion. That is, there’s a fair bit of dialog that you’re either asked about or choose to respond with. Because multiple choice was the only option, I was somewhat constrained. I subsequently was prodded for some assignments, and found I could do what I’d talked about: I used the assignment tool to create questions that ask learners to go out and do things, and then provided some guidance to self-evaluate.

One thing I learned is that I don’t have a good mental model of how the software works. I ‘get’ the tracks, but there’s another aspect I don’t understand. It turns out that though I’d filmed myself at 720p, and exported at 720p, the result still had an unnecessary border. Fortunately, in stumbling around I found a ‘crop’ setting that forced it to 1280 x 720 (720p), but I don’t understand why that was necessary!?

I still want to add some examples (as documents) before I feel it’s fully ready to go. And I now sympathize much more with those who struggle to do good learning design under real-world constraints.  It’s also certainly been an example of my accepting assignments that are within my reach, but not within my grasp; my learning style ;).   More later, but thought I’d share my struggles and learning. I welcome your feedback.

5 December 2017

Usability and Networks

Clark @ 8:04 AM

As I mentioned in an earlier post, I have been using Safari and Google to traverse the networks. And in a comment, I mentioned that the recent launch of the new Firefox browser was prompting me to switch.  And that’s now been put through a test, and I thought it instructive to share my learnings.

The rationale for the switch is that I don’t completely trust Google and Apple with my data. Or anyone, really, for that matter.  On principle. I had used Safari over Chrome because I trust Apple a wee bit more, and Firefox was a bit slow.  And Safari just released a version that stops videos from auto-starting. And similarly, Google’s search has been the best, and with a browser extension and some adjustments, I was getting ads blocked, tracking stopped, and more.  Still, I wasn’t happy.  And I hadn’t figured out how to do an image search with DuckDuckGo (something I do a fair bit) the last time I tried, so that hadn’t been a search option.

All this changed with the release of Firefox’s new Quantum browser. After a trial spin, the speed was good, as was the whole experience. Now, I want to have an integrated experience across my devices, so I downloaded the Firefox versions for my iDevices as well. And, as long as I was changing, I tried DuckDuckGo again, and found it now does have image search. So I made it my search engine as well.

And, after about a week of experience, I’m not sticking with Firefox.  The desktop version is all I want, but the iDevice versions don’t cut it. I use my toolbar bookmarks a lot.  Many times a day.  And on the iDevices, they do synch, but…they’re buried behind four extra clicks. And that’s just not acceptable.  The user experience kills it for me. Those versions also don’t take advantage of the revised code behind the new desktop version, but it wasn’t the speed that killed the deal.  The point I want to make is that you have to look at the total experience, not just one or another in isolation. It’s time for an ecosystem perspective.

On the other hand, I’m still trying DuckDuckGo. It seems to return good hits. And the fact that they’re not tracking me is important. If I can avoid tracking, I will. Sure, my ISP can still track me, and so can Apple, but I’ll keep working on those. Oddly, it seems to return different results on different devices (?!). Still testing.

And, as long as we’re talking the net, I’m going to do something I don’t usually do here; I’m going to take a position on something besides learning. To do so, let me provide some context. I’ve been on the net since before there was a web. Way before. Circa 1978, I was able to send and receive email even though there wasn’t any internet. I was at a uni with ARPANET, however, so I had a taste. Roll forward a decade and more, and I was playing with Gopher and WAIS and USENET before Tim Berners-Lee had created HTTP. That is, there were other protocols that preceded it. (In fact, I was blasé about the web at first, because of that; doh!) My point is that I’ve been leveraging the benefits of networks for a bloody long time.

And now we depend on it. The internet is the basis for elearning! And, of course, so much more. It has vastly accelerated our ability to interact. And while that’s created problems, it’s also enabled incredible benefits.  Innovation flourishes when there are open standards.  When people can build upon a solid and open foundation, creativity means new opportunity.  Network effects are true for people and for data.

Which is why I’m firmly in the camp for net neutrality. This is important! (It must be, because I used bold, which I almost never do ;). The alternative, where providers would be able to throttle or even bar certain types of data, will stifle innovation. It’s like plumbing, telephone, and electricity: they need to be available as long as you can pay your bill (and there need to be options to support those with limited incomes). Please, please, please let your elected representatives and the FCC know that this is important to you.


28 November 2017

eLearning Land

Clark @ 8:03 AM

This post is just a bit of elearning silliness, parodying our worst instincts…

Welcome back my friends, to the show that never ends. We’re so glad you could attend. Come inside, come inside! – Emerson, Lake & Palmer: Karn Evil 9, 1st Impression, Part 2.

It’s so good to see you, and I hope you’re ready for fun. Let’s introduce you to the many attractions to be found here.  We’ve got entertainment suitable for all ages, and wallets!  You can find something you like here, and for an attractive cost.

To start, we have the BizBuzz arcade. It’s a mirror maze, where all things look alike. Microlearning, contextual performance support, mobile elearning, chunking, just-in-time, it’s all there.  Shiny objects appear and disappear before your eyes!  Conceptual clarity is boring, it’s all about the sizzle.

And over here is the Snake Oil Pool.  It’s full of cures for what ails you!  We’ve got potions and lotions and aisles of styles.  It’s slippery, and unctuous; you can’t really get a handle on it, so how can you go wrong?  Apply our special solution, and your pains go away like magic.  Trust us.

Step right up and ride the Hype Balloon!  It’s a quick trip to the heights, held aloft by empty promises based upon the latest trends: neuro/brain-based, millennial/generations, and more.  It doesn’t matter if it holds water, because it’s lighter than air!

Don’t forget the wild Tech Lifecycle ride. You’ll go up, you’ll go down, you’ll take unpredictable twists, followed by a blazing finale. Get in line early!  You’ll leave with a lighter pocketbook, and perhaps a slight touch of nausea, but no worries, it was fun while it lasted.

Come one, come all! We’ll help you feel better, even if when you leave things aren’t any different. You’ll at least have been taken for a ride.  We’ll hope to see you again soon.

This was a jest, this was only a jest. If this were a real emergency, I’d write a book or something. Seriously, we do have to pay attention to the science in what we’re doing, and view things with a healthy skepticism.  We now return you to your regularly scheduled blog, already in progress.  

22 November 2017

Solutions for Tight Cycles of Assessment

Clark @ 8:03 AM

In general, in a learning experience stretching out over days (as spaced learning would suggest), learners want regular feedback about how they’re doing. As a consequence, you want regular cycles of assessment. However, there’s a conflict. In workplace performance we produce complex outputs (RFPs, product specs, sales proposals, strategies, etc.). These still typically require human oversight to evaluate. Yet resource limitations are likely in most such situations, so we prefer auto-marked solutions (read: multiple choice, fill-in-the-blank, etc.). How do we reconcile meaningful assessment with realistic constraints? This is one of the questions I’ve been thinking about, and I thought I’d share my reflections with you.

In workplace learning, at times we can get by with auto-assessment, particularly if we use coaching beyond the learning event. Yet if it matters, we’d rather have them practice things that matter before they’re used for real work. And for formal education, we want learners to have at least weekly cycles of performance and assessment. Yet we also don’t want just rote knowledge checks, as they don’t lead to meaningful performance. We need some intermediate steps, and that’s what I’ve been thinking on.

So first, in Engaging Learning, I wrote about what I called ‘mini-scenarios’. These are really just better-written multiple-choice questions. Such questions don’t ask learners to identify definitions or the like (simple recognition), but instead put learners in contextual situations. Here, the learner chooses between different decisions, which means retrieving the information, mapping it to the context, and then choosing the best answer. Such a question has a story context, a precipitating situation, and then alternative decisions. (And the alternatives are ways learners go wrong, not silly or obviously incorrect choices.) I suggest that your questions should be like this, but are there more?
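To make the structure concrete, here’s a minimal sketch of how such a mini-scenario item might be represented in code. The field names and the toy item are mine, purely for illustration; the point is the shape: a story context, a precipitating situation, and decisions whose wrong options each embody a real misconception with targeted feedback.

```python
from dataclasses import dataclass, field

@dataclass
class Choice:
    decision: str
    feedback: str        # addresses the specific way learners go wrong
    correct: bool = False

@dataclass
class MiniScenario:
    story_context: str   # who and where the learner is
    situation: str       # the precipitating event requiring a decision
    choices: list[Choice] = field(default_factory=list)

    def respond(self, index: int) -> tuple[bool, str]:
        """Return whether the chosen decision is best, plus targeted feedback."""
        c = self.choices[index]
        return c.correct, c.feedback

# A toy item: the learner must *apply* a guideline, not recall a definition.
item = MiniScenario(
    story_context="You manage support for a newly launched product.",
    situation="A customer reports intermittent crashes but can't reproduce them.",
    choices=[
        Choice("Ask for logs and exact steps before escalating.",
               "Right: gather evidence before acting.", correct=True),
        Choice("Escalate immediately to engineering.",
               "Premature: engineering can't act without reproduction data."),
        Choice("Tell the customer to reinstall.",
               "A common reflex, but it skips diagnosis entirely."),
    ],
)

ok, msg = item.respond(0)
```

Note that each distractor carries its own feedback; that’s where the diagnostic value of a well-written mini-scenario lives.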

Branching scenarios are another, rich form of practice. Here it’s about tying together the decisions (they do tend to travel in packs) and consequences. When you do so, you can provide an immersive experience.  (When designed well, of course.)  They’re a pragmatic approximation of a full game experience.  Full games are really good when you need lots of practice (or can amortize over a large audience), but they’re an additional level of complexity to develop.

Another one, that Tom Reeves presented in an article, was intriguing. You not only have to make the right choice, but then you also choose the reason why you made that choice. It’s only an additional step, but it gets at both the choice and the thinking behind it. And that’s important: it would minimize the likelihood of guessing, and provide a richer basis for diagnosis and feedback. Of course, no one is producing a ‘question type’ like this that I know of, but it’d be a good one.
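Since no tool offers this question type that I know of, here’s a hypothetical sketch of its scoring logic (all names invented). The interesting part is that the mismatches are diagnostic: a right choice with the wrong reason smells like a guess, while a right reason misapplied suggests a transfer problem.

```python
# Hypothetical two-tier item: the learner picks a decision AND a reason.
# Full credit only when both are right; each mismatch pattern gets its own
# diagnosis, which is the basis for richer feedback.

def score_two_tier(decision: str, reason: str,
                   correct_decision: str, correct_reason: str) -> str:
    if decision == correct_decision and reason == correct_reason:
        return "correct"
    if decision == correct_decision:
        return "right choice, wrong rationale"   # possibly a lucky guess
    if reason == correct_reason:
        return "right idea, misapplied"          # transfer or mapping problem
    return "incorrect"

verdict = score_two_tier("delay the launch", "the acceptance tests are failing",
                         "delay the launch", "the acceptance tests are failing")
```

A real implementation would attach feedback text to each verdict, but even this skeleton shows why the extra tier is cheap to add and valuable to mark.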

An approach we used in the past was to have learners create a complex answer, but then evaluate it themselves! In this case it was a verbal response to a question (we were working on speaking to the media), and the learner could hear their own answer and a model one. Of course, you’d want to pair this with an evaluation guide as well. The learner creates a response, and then is presented with their response, a good response, and a rubric about what makes a good answer. Then we ask the learner to self-evaluate against the rubric. This has the additional benefit that learners are evaluating work with guidance, and can internalize the behavior to become self-improving learners. (This is the basis of ‘reciprocal teaching’, one of the component approaches in Cognitive Apprenticeship.)
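The flow described above might be sketched like this; it’s a toy example with invented names, loosely modeled on the media-training case, showing the capture-compare-rate cycle rather than any particular tool:

```python
# Self-evaluation flow: the learner produces a free-form answer, then rates it
# against a rubric alongside a model answer. The structured result splits
# criteria met from criteria to work on, which is the guided-evaluation step
# that helps learners internalize the standard.

MODEL_ANSWER = "Acknowledge the question, bridge to your key message, stay brief."
RUBRIC = ["Acknowledged the question", "Bridged to a key message", "Stayed concise"]

def self_evaluate(learner_answer: str, ratings: list[bool]) -> dict:
    """Pair the learner's answer with the model and their per-criterion ratings."""
    assert len(ratings) == len(RUBRIC), "one rating per rubric criterion"
    return {
        "learner_answer": learner_answer,
        "model_answer": MODEL_ANSWER,
        "met": [c for c, ok in zip(RUBRIC, ratings) if ok],
        "to_improve": [c for c, ok in zip(RUBRIC, ratings) if not ok],
    }

# The learner listens to both answers, then honestly rates themselves.
result = self_evaluate("I answered directly but rambled a bit at the end.",
                       [True, False, False])
```

The machine never judges the complex answer; it just scaffolds the comparison, which is what keeps this auto-deliverable yet meaningful.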

Each of these is auto- (or self-) marked, yet provides valuable feedback to the learner and valuable practice of skills. They shouldn’t come at the expense of instructor-marked complex work products or performances, but they can supplement them. The goal is to provide the learner with guidance about how their understanding is progressing while keeping marking loads to a minimum. It’s not ideal, but it’s practical. And it’s not exclusive of knowledge tests either, but it’s more applied and therefore likely to be more valuable to the learner and the learning. I’m still percolating on this, but I welcome hearing what approaches (and reflections) you have.

16 November 2017

#AECT17 Conference Contributions

Clark @ 8:04 AM

So, at the recent AECT 2017 conference, I participated in three ways that are worth noting.  I had the honor of participating in two sessions based upon writings I’d contributed, and one based upon my own cogitations. I thought I’d share the thinking.

For my own presentation, I shared my efforts to move ‘rapid elearning’ forward. I put Van Merrienboer’s 4 Component ID and Guy Wallace’s Lean ISD as goals, but recognized the need for intermediate steps like Michael Allen’s SAM, David Merrill’s ‘Pebble in a Pond‘, and Cathy Moore’s Action Mapping. I suggested that even these might be too big a leap, and that practitioners want steps that are slight improvements on their existing processes. These included three things: heuristics, tools, and collaboration. Here I was indicating specifics for each that could move from well-produced to well-designed.

In short, I suggested that while collaboration is good, many corporate situations want to minimize staff time. Consequently, I suggested identifying those critical points where collaboration will be most useful. Then, I suggested shortcuts relative to the full process. So, for instance, when working with SMEs, focus on decisions to keep the discussion away from unnecessary knowledge. Finally, I suggested the use of tools to support the gaps our brain architectures create. Unfortunately, the audience was small (27 parallel sessions, and at the end of the conference), so there wasn’t a lot of feedback. Still, I did have some good discussion with attendees.

Then, for one of the two participation sessions, the book I contributed to solicited a wide variety of position papers from respected ed tech individuals, and then solicited responses to same. I had responded to a paper suggesting three trends in learning: a lifelong learning record system, a highly personalized learning environment, and expanded learner control of time, place, and pace of instruction. To those three points I added two more: the integration of meta-learning skills and the breakdown of the barrier between formal learning and lifelong learning. I believe both are going to be important: the former because of the decreasing half-life of knowledge, the latter because of the ubiquity of technology.

Because the original author wasn’t present, I was paired for discussion with another author who shares my passion for engaging learning, and that was the topic of our discussion table.  The format was fun; we were distributed in pairs around tables, and attendees chose where to sit. We had an eager group who were interested in games, and my colleague and I took turns answering and commenting on each other’s comments. It was a nice combination. We talked about the processes for design, selling the concept, and more.

For the other participation session, the book was a series of monographs on important topics.  The discussion chose a subset of four topics: MOOCs, Social Media, Open Resources, and mLearning. I had written the mLearning chapter.  The chapter format included ‘take home’ lessons, and the editor wanted our presentations to focus on these. I posited the basic mindshifts necessary to take advantage of mlearning. These included five basic principles:

  1. mLearning is not just mobile elearning; it’s a wide variety of things.
  2. The focus should be on augmenting us, whether in our formal learning, or via performance support, social, etc.
  3. Apply the Least Assistance Principle, focusing on the core stuff given the limited interface.
  4. Leverage context: take advantage of the sensors and situation to minimize content and maximize opportunity.
  5. Recognize that mobile is a platform, not a tactic or an app; once you ‘go mobile’, folks will want more.
The sessions were fun, and the feedback was valuable.

15 November 2017

#AECT17 Reflections

Clark @ 8:10 AM

Ok, so I was an academic for a brief and remarkably good period of time (a long time ago). Mind you, I’ve kept my hand in: reviewing journal and conference submissions, writing the occasional book chapter, contributing to some research, even playing a small role in some grant-funded projects. I like academia; it’s just that circumstances took me away (and I like consulting too; different, not one better). However, there are a lot of benefits from staying engaged, particularly keeping up with the state of the art, at least from one perspective. Hence, I attended the most recent meeting of the Association for Educational Communications & Technology, pretty much the society for academics in instructional technology.

The event features many of your typical components: keynotes, sessions, receptions, and the interstitial social connections. One of the differences is that there’s no vendor exhibition. And there are a lot of concurrent sessions: roughly 27 per time slot!   Now, you have to understand, there are multiple agendas, including giving students and new faculty members opportunities for presentations and feedback. There are also sessions designed for tapping into the wisdom of the elders, and working sessions to progress understandings. This was only my second, so I may have the overall tenor wrong.  Regardless, here are some reflections from the event:

For one, it’s clear that there’s an overall awareness of what could, and should, be happening in education. In the keynotes, the speakers repeatedly conveyed messages about effective learning. What wasn’t effectively addressed was the comprehensive resistance of the education system to meaningful change.  Still, all three keynotes, Driscoll, Cabrera, and Reeves, commented in one way or another on problems and opportunities in education. Given that many of the faculty members come from Departments of Education, this is understandable.

Another repeated emergent theme (at least for me) was the need for meaningful research. What was expressed by Tom Reeves in a separate session was the need for a new approach to research grounded in focusing on real problems. I’ve been a fan of his call for Design-Based Research, and liked what he said: all thesis students should introduce their topics with the statement “the problem I’m looking at is”. The sessions, however, seemed to include too many small studies. (In my most cynical moments, I wonder how many studies have looked at teaching students or teacher professional development and their reflections/use of technology…).

One session I attended was quite exciting. The topic was the use of neuroscience in learning, and the panelists were all people using scans and other neuroscience data to inform learning design. While I generally deride the hype that usually accompanies the topic, here were real researchers talking actual data and the implications, e.g. for dyslexia. While most of the results from research that have implications for design are still at the cognitive level, it’s important to continue to push the boundaries.

I focused my attendance mostly on the Organizational Training & Performance group, and heard a couple of good talks.  One was a nice survey of mentoring, looking across the research, and identifying what results there were, and where there were still opportunities for research. Another study did a nice job of synthesizing models for human performance technology, though the subsequent validation approach concerned me.

I did a couple of presentations myself that I’ll summarize in tomorrow’s post. The challenges are different than in corporate learning technology, but there are interesting outcomes that are worth tracking. All in all, a valuable experience.

10 November 2017

Tom Reeves AECT Keynote Mindmap

Clark @ 7:11 AM

Thomas Reeves opened the third day of the AECT conference with an engaging keynote that used the value of conation to drive the argument for Authentic Learning. Conation is the component of cognition that consists of your intent to learn, and is under-considered. Authentic learning is very much collaborative problem-solving. He used the challenges from robots/AI to motivate the argument.

