Learnlets
Clark Quinn's Learnings about Learning
(The Official Quinnovation blog)

6 December 2017

Conceptual Clarity

Clark @ 8:07 AM

Ok, so I can be a bit of a pedant.  Blame it on my academic background, but I believe conceptual clarity is important! If we play fast and loose with terminology, we can be convinced of something without truly understanding it.  Ultimately, we can waste money chasing unwarranted directions, and worse, perhaps even do wrong by our learners.

Where do the problems arise?  Sometimes, it’s easy to ride a bizbuzz bandwagon.  Hey, the topic is hot, and it sounds good.  Other times, it’s just too much effort to dig deeper. Yet getting it wrong ends up meaning you’re wasting resources.

Let’s be clear, I’m not talking myths. Those abound, but here I’m talking about ideas that are being used relatively indiscriminately, but in at least one interpretation there’s real value.  The important thing is to separate the wheat from the chaff.

Some concepts that are running around recently and could use some clarity are the following:

Microlearning.  I tried to be clear about this here. In short, microlearning is about small chunks where the learning aggregates over time.  Aka spaced learning.  But other times, people really mean performance support (just-in-time help to succeed in the moment). What you don’t want is someone pretending it’s so unique that they can trademark it.

70:20:10.  This is another that some people deride, and others find value in. I’ve also talked about this.   The question is why they differ, and my answer is that the folks who use it as a way to think more clearly about a whole learning experience find value. Those who fret about the label are missing the point.  And I acknowledge that the label is a barrier, but that horse has bolted.

Neuro- (aka brain-). Yes, our brains are neurologically based. And yes, there are real implications. Some.  Like ‘the neurons that fire together, wire together’.  And yet there’re a whole lot of discussions labeled neuro that are really at the next level up: cognitive.  That just misleads folks by making things sound more scientific.

Unlearning. There’s a lot of talk about unlearning, but in the neurological sense it doesn’t make sense. You don’t unlearn something.  As far as we can tell, it’s still there, just increasingly hard to activate. The only real way to ‘unlearn’ is to learn some other response to the same situation.  You learn ‘over’ the old learning. Or overlearn.  But not unlearn. It’s an unconcept.

Gamification. This is actually the one that triggered this post. In theory, gamification is the application of game mechanics to learning.  Interestingly, Raph Koster wrote that what makes games fun is that they are intrinsically about learning!  However, there are important nuances.  It’s not just about adding PBL (points, badges, and leaderboards). These aren’t bad things, but they’re secondary.  Designing the intrinsic action around the decisions learners need to be able to make is a deeper and more meaningful application.  Yet people tend to ignore the latter because it’s ‘harder’, when it’s really just good learning design.

There are more, of course, but hopefully these illustrate the problem. (What are yours?)  Please, please, be professional and take the time to understand our cognitive architecture well enough that you can make these distinctions on your own. We need the conceptual clarity!  Hopefully then we can reserve our excitement for ideas that truly add value.

5 December 2017

Usability and Networks

Clark @ 8:04 AM

As I mentioned in an earlier post, I have been using Safari and Google to traverse the networks. And in a comment, I mentioned that the recent launch of the new Firefox browser was prompting me to switch.  And that’s now been put through a test, and I thought it instructive to share my learnings.

The rationale for the switch is that I don’t completely trust Google and Apple with my data. Or anyone, really, for that matter.  On principle. I had used Safari over Chrome because I trust Apple a wee bit more, and Firefox was a bit slow.  And Safari just released a version that stops videos from auto-starting. And similarly, Google’s search has been the best, and with a browser extension and some adjustments, I was getting ads blocked, tracking stopped, and more.  Still, I wasn’t happy.  And I hadn’t figured out how to do an image search with DuckDuckGo (something I do a fair bit) the last time I tried, so that hadn’t been a search option.

All this changed with the release of Firefox’s new Quantum browser. After a trial spin, the speed was good, as was the whole experience.  Now, I want to have an integrated experience across my devices, so I downloaded the Firefox versions for my iDevices as well.  And, as long as I was changing, I tried DuckDuckGo again, and found it did have image search.  So I made it my search engine as well.

And, after about a week of experience, I’m not sticking with Firefox.  The desktop version is everything I want, but the iDevice versions don’t cut it. I use my toolbar bookmarks a lot, many times a day.  On the iDevices they do sync, but they’re buried behind four extra clicks, and that’s just not acceptable. The user experience kills it for me. Those versions also don’t take advantage of the revised code behind the new desktop version, but it wasn’t speed that killed the deal.  The point I want to make is that you have to look at the total experience, not just one component in isolation. It’s time for an ecosystem perspective.

On the other hand, I’m still trying DuckDuckGo.  It seems to return good hits.  And the fact that they’re not tracking me is important: if I can avoid tracking, I will.  Sure, my ISP can still track me, and so can Apple, but I’ll keep working on those.  Oddly, it seems to return different results on different devices (?!?!).  Still testing.

And, as long as we’re talking the net, I’m going to do something I don’t usually do here; I’m going to take a position on something besides learning. To do so, let me provide some context. I’ve been on the net since before there was a web.  Way before.  Circa 1978, I was able to send and receive email even though there wasn’t yet an internet. I was at a uni with ARPANET access, however, so I had a taste. Roll forward a decade and more, and I was playing with Gopher and WAIS and USENET before Tim Berners-Lee had created HTTP.  That is, there were other protocols that preceded it. (In fact, I was blasé about the web at first because of that; doh!)  My point is that I’ve been leveraging the benefits of networks for a bloody long time.

And now we depend on it. The internet is the basis for elearning! And, of course, so much more. It has vastly accelerated our ability to interact. And while that’s created problems, it’s also enabled incredible benefits.  Innovation flourishes when there are open standards.  When people can build upon a solid and open foundation, creativity means new opportunity.  Network effects are true for people and for data.

Which is why I’m firmly in the camp for net neutrality.  This is important!  (It must be, because I used bold, which I almost never do. ;)  The alternative, where providers can throttle or even bar certain types of data, will stifle innovation.  It’s like plumbing, telephone, and electricity: they need to be available as long as you can pay your bill (and there need to be options to support those with limited incomes).  Please, please, please let your elected representatives and the FCC know that this is important to you.


29 November 2017

Before the Course

Clark @ 8:04 AM

It appears that, too often, people are building courses when they don’t need to (or, more importantly, shouldn’t).  I realize that there are pressures to make a course when one is requested, including expectations and familiarity, but really, you should be doing some initial thinking about what makes sense.  So here’s a rough guide about the thinking you should do before you course.

You begin with a performance problem.  Something’s not right: calls take too long, sales success rates are too low, there’re too many errors in manufacturing.  So it must need training, right?  Er, no.  There’s this thing called ‘performance consulting’ that’s about identifying the gaps preventing the desirable outcomes, and not all of those gaps are ones training can fill.  So we need to triage, and see what’s broken and what’s the priority.

To start, people can simply not know what they’re supposed to do.  That may seem obvious, but it is in fact often the case.  Thus, there’s a need to communicate. Note that this, like all of these steps, is more complex than just ‘communicate’: there are issues about who needs to communicate, and when, and to whom.  But it’s not (at least initially) a training problem.

If they do know, and could do it, but aren’t, the problem isn’t going to be solved by training.  As someone once put it, if they could do it if their life depended on it, then there’s something else going on. If they’re not following safety procedures because the procedures are too onerous, a course isn’t going to fix that. You need to address their motivation.

Now, if they can’t do it, then could they do it if they had the right tools, or more people, or more time? In other words, is it a resource problem?  And, in one way I like to think about it: can we put the solution in the world, instead of in the head?  Will lookup tables, checklists, step-by-step guides or videos solve the problem? Or even connections to other folks! (There are times when it doesn’t make sense to course or even job-aid; e.g. if it’s changing too fast, or too unique, or…)

And, of course, if you don’t have the right people, training still may not work. If they need to meet certain criteria, but don’t, training won’t solve it.  Training can’t fix color-blindness or lack of height, for instance.

Finally, if the prior solutions won’t solve it, and there’s a serious skill gap, then it’s time for training.  And not just knowledge dump, of course, but models and examples and meaningful (and spaced) practice.
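If it helps to see the order of those checks in one place, here’s a minimal sketch of the triage as a decision procedure in Python. It’s purely illustrative: the field names are my stand-ins for the questions a performance consultant would actually ask.

```python
from dataclasses import dataclass

@dataclass
class PerformanceProblem:
    # Hypothetical answers you'd gather through performance consulting.
    know_expectations: bool        # do they know what they're supposed to do?
    could_if_life_depended: bool   # the classic motivation test
    fixable_with_resources: bool   # tools, people, or time would solve it
    fixable_in_the_world: bool     # a job aid, checklist, or lookup would do
    meet_criteria: bool            # are they the right people for the job?

def triage(p: PerformanceProblem) -> str:
    """Walk the gaps in order; a course is the last resort."""
    if not p.know_expectations:
        return "communicate: clarify who does what, and when"
    if p.could_if_life_depended:
        return "motivation: find and fix why they aren't doing it"
    if p.fixable_with_resources:
        return "resources: provide tools, people, or time"
    if p.fixable_in_the_world:
        return "performance support: put the solution in the world"
    if not p.meet_criteria:
        return "selection: training can't fix missing prerequisites"
    return "training: models, examples, and spaced practice"
```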

Again, these are all abbreviated, and this is oversimplified.  There’s more depth to be unpacked, so this is just a surface level way to represent that a course isn’t always the solution.  But before you course, consider the other solutions. Please.

28 November 2017

eLearning Land

Clark @ 8:03 AM

This post is just a bit of elearning silliness, parodying our worst instincts…

Welcome back my friends, to the show that never ends. We’re so glad you could attend. Come inside, come inside! – Emerson, Lake & Palmer: Karn Evil 9, 1st Impression, Part 2.

It’s so good to see you, and I hope you’re ready for fun. Let’s introduce you to the many attractions to be found here.  We’ve got entertainment suitable for all ages, and wallets!  You can find something you like here, and for an attractive cost.

To start, we have the BizBuzz arcade. It’s a mirror maze, where all things look alike. Microlearning, contextual performance support, mobile elearning, chunking, just-in-time, it’s all there.  Shiny objects appear and disappear before your eyes!  Conceptual clarity is boring, it’s all about the sizzle.

And over here is the Snake Oil Pool.  It’s full of cures for what ails you!  We’ve got potions and lotions and aisles of styles.  It’s slippery, and unctuous; you can’t really get a handle on it, so how can you go wrong?  Apply our special solution, and your pains go away like magic.  Trust us.

Step right up and ride the Hype Balloon!  It’s a quick trip to the heights, held aloft by empty promises based upon the latest trends: neuro/brain-based, millennial/generations, and more.  It doesn’t matter if it holds water, because it’s lighter than air!

Don’t forget the wild Tech Lifecycle ride. You’ll go up, you’ll go down, you’ll take unpredictable twists, followed by a blazing finale. Get in line early!  You’ll leave with a lighter pocketbook, and perhaps a slight touch of nausea, but no worries, it was fun while it lasted.

Come one, come all! We’ll help you feel better, even if, when you leave, things aren’t any different. You’ll at least have been taken for a ride.  We’ll hope to see you again soon.

This was a jest, this was only a jest. If this were a real emergency, I’d write a book or something. Seriously, we do have to pay attention to the science in what we’re doing, and view things with a healthy skepticism.  We now return you to your regularly scheduled blog, already in progress.  

22 November 2017

Solutions for Tight Cycles of Assessment

Clark @ 8:03 AM

In general, in a learning experience stretching over days (as spaced learning would suggest), learners want regular feedback about how they’re doing. As a consequence, you want regular cycles of assessment. However, there’s a conflict.  In workplace performance we produce complex outputs (RFPs, product specs, sales proposals, strategies, etc.), and these still typically require human oversight to evaluate.  Yet resource limitations are likely in most such situations, so we prefer auto-marked solutions (read: multiple choice, fill-in-the-blank, etc.).  How do we reconcile meaningful assessment with realistic constraints?  This is one of the questions I’ve been thinking about, and I thought I’d share my reflections with you.

In workplace learning, at times we can get by with auto-assessment, particularly if we use coaching beyond the learning event.  Yet if it matters, we’d rather have learners practice before their skills are used for real work.  And for formal education, we want learners to have at least weekly cycles of performance and assessment.  Yet we don’t want just rote knowledge checks, as they don’t lead to meaningful performance.  We need some intermediate steps, and that’s what I’ve been thinking on.

So first, in Engaging Learning, I wrote about what I called ‘mini-scenarios’. These are really just better-written multiple-choice questions.  Such questions don’t ask learners to identify definitions or the like (simple recognition); instead they put learners in contextual situations.  Here, the learner chooses between different decisions, which means retrieving the information, mapping it to the context, and then choosing the best answer.  Such a question has a story context, a precipitating situation, and then alternative decisions. (And the alternatives are ways learners go wrong, not silly or obviously incorrect choices.)  I suggest that your questions should be like this, but are there more options?
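To make that structure concrete, here’s a minimal sketch of a mini-scenario as a data structure, again in Python. The fields mirror the elements above; the example content is entirely hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MiniScenario:
    """A better-written multiple-choice question: decisions in context,
    not definitions. The field names are illustrative, not a spec."""
    story_context: str            # who and where the learner is
    precipitating_situation: str  # what just happened, forcing a decision
    decisions: list[str]          # the best choice plus plausible missteps
    best: int                     # index of the best decision
    feedback: list[str]           # consequence-based feedback, per decision

example = MiniScenario(
    story_context="You're the duty manager on a short-staffed shift.",
    precipitating_situation="A regular customer escalates a billing complaint.",
    decisions=[
        "Recheck the bill with the customer, line by line",
        "Explain company policy and stand firm",
        "Offer an immediate discount to end the conversation",
    ],
    best=0,
    feedback=[
        "The error surfaces and the customer stays a customer.",
        "The complaint escalates into a refund demand and a bad review.",
        "It works today, but trains customers to complain for discounts.",
    ],
)
```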

Branching scenarios are another, rich form of practice. Here it’s about tying together the decisions (they do tend to travel in packs) and consequences. When you do so, you can provide an immersive experience.  (When designed well, of course.)  They’re a pragmatic approximation of a full game experience.  Full games are really good when you need lots of practice (or can amortize over a large audience), but they’re an additional level of complexity to develop.

Another approach, which Tom Reeves presented in an article, is intriguing: you not only have to make the right choice, but you then also choose the reason why you made that choice. It’s only one additional step, but it gets at both the choice and the thinking behind it.  And this is important: it would minimize the likelihood of guessing, and provide a richer basis for diagnosis and feedback.  Of course, no one is producing a ‘question type’ like this that I know of, but it’d be a good one.
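Since, as far as I know, no tool offers this natively, here’s a rough sketch of what such a question type might look like, under the same caveats as above; the diagnosis categories are my own guesses at useful feedback, not Reeves’.

```python
from dataclasses import dataclass

@dataclass
class ChoiceWithReason:
    """Two-step question: pick the decision, then pick why.
    Scoring both reduces guessing and sharpens diagnosis."""
    prompt: str
    choices: list[str]
    reasons: list[str]
    best_choice: int   # index of the right decision
    best_reason: int   # index of the right rationale

    def diagnose(self, choice: int, reason: int) -> str:
        if choice == self.best_choice and reason == self.best_reason:
            return "right decision, right thinking"
        if choice == self.best_choice:
            return "right decision, wrong model: probe the reasoning"
        if reason == self.best_reason:
            return "right model, wrong application: practice mapping it"
        return "revisit the underlying model"
```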

An approach we used in the past was to have the learner create a complex answer, and then evaluate it themselves! In this case it was a verbal response to a question (we were working on speaking to the media): the learner could hear their own answer and then a model one.  Of course, you’d want to pair this with an evaluation guide as well. The learner creates a response, and then is presented with their response, a good response, and a rubric about what makes a good answer. Then we ask the learner to self-evaluate against the rubric.  This has the additional benefit that learners are evaluating work with guidance, and can internalize the behavior to become self-improving learners. (This is the basis of ‘reciprocal teaching’, one of the component approaches in Cognitive Apprenticeship.)

Each of these is auto- (or self-) marked, yet provides valuable feedback to the learner and valuable practice of skills. They shouldn’t come at the expense of instructor-marked complex work products or performances, but they can supplement them. The goal is to give the learner guidance about how their understanding is progressing while keeping marking loads to a minimum. It’s not ideal, but it’s practical.  And it’s not exclusive of knowledge tests either, but it’s more applied and therefore likely to be more valuable to the learner and the learning. I’m still percolating on this, but I welcome hearing what approaches (and reflections) you have.

21 November 2017

My Professional Learner’s Toolkit

Clark @ 8:07 AM

My colleague, Harold Jarche, recently posted about his professional learning toolkit, reflecting our colleague Jane Hart’s post about a Modern Learner’s Toolkit. It’s a different cut through the top 10 tools.  So I thought I’d share mine, and my reflections.

Favorite browser and search engine: I use Safari and Google, by default. Of course, I keep Chrome and Firefox around for when something doesn’t work (e.g. Qualtrics).  I would prefer another search engine, probably DuckDuckGo, but I’m not yet facile with it, for instance for finding images.

A set of trusted web resources: That’d be Wikipedia, pretty much. And online magazines, such as eLearnMag and Learning Solutions, and ones for my personal interests. I often use Pixabay to find images.

A number of news and curation tools: I use Google News and the ABC (Oz, not US) in my browser, and the BBC and News apps on my iDevices. I also use Feedblitz to bring blogposts into my email.  I keep my own bookmarks using my browser.

Favorite web course platforms: I haven’t really taken online courses. I’ve used Zoom to share.

A range of social networks: I use LinkedIn professionally, as well as Slack. And Twitter, of course.  I stay in touch with my ITA colleagues via Skype.  Facebook is largely personal.

A personal information system: I use both Notability and Notes to take notes.  Notes more for personal stuff, Notability for work-related. I use Omnigraffle for diagrams and mindmaps.  And OmniOutliner also helps when I want to think hierarchically.

A blogging or website tool: I use WordPress for Learnlets (i.e. here), and I use Rapidweaver for my sites: Quinnovation and my book sites.

A variety of productivity apps and tools: Calendar is crucial, and Pagico keeps me on track for projects. I use Google Maps for navigation. I use SplashID for passwords and other private data. I often read and markup documents on my iPad with GoodReader. CloudClip lets me share a multi-item clipboard across my devices.  Reflection: this overlaps with the personal information system.

A preferred office suite: I don’t have a preferred suite, though I’d like to use the Apple suite. I use Word to write (Pages hasn’t had industrial-strength outlining) and Keynote to create presentations (i.e. one app from each suite). I don’t create spreadsheets often.

A range of communication and collaboration tools: I use Google Drive to collaborate on representations.  I have used Dropbox to share documents as well. And of course Mail for email.   Reflection: this overlaps with social networks.

1 or more smart devices: I’d be lost without my iPhone and iPad (neither of which is the latest model). I use the phone for ‘in the moment’ things, the iPad for when I have longer time frames.

So, that’s my toolkit, what’s yours?

Jane's toolkit diagram

16 November 2017

#AECT17 Conference Contributions

Clark @ 8:04 AM

So, at the recent AECT 2017 conference, I participated in three ways that are worth noting. I had the honor of appearing in two sessions based upon writings I’d contributed, and gave one presentation based upon my own cogitations. I thought I’d share the thinking.

For my own presentation, I shared my efforts to move ‘rapid elearning’ forward. I put up Van Merrienboer’s 4 Component ID and Guy Wallace’s Lean ISD as goals, but recognized the need for intermediate steps like Michael Allen’s SAM, David Merrill’s ‘Pebble in a Pond‘, and Cathy Moore’s Action Mapping. I suggested that even these might be too big a leap, and that practitioners want steps that are slight improvements on their existing processes. These come down to three things: heuristics, tools, and collaboration. For each, I indicated specifics that could move elearning from well-produced to well-designed.

In short, I suggested that while collaboration is good, many corporate situations want to minimize staff. Consequently, I suggested identifying the critical points where collaboration is most useful. Then, I suggested shortcuts relative to the full process: for instance, when working with SMEs, focus on decisions to keep the discussion away from unnecessary knowledge. Finally, I suggested using tools to bridge the gaps our cognitive architecture creates.  Unfortunately, the audience was small (27 parallel sessions, and at the end of the conference), so there wasn’t a lot of feedback. Still, I did have some good discussion with attendees.

Then, for one of the two participation sessions: the book I contributed to had solicited a wide variety of position papers from respected ed tech individuals, and then responses to them.  I had responded to a paper suggesting three trends in learning: a lifelong learning record system, a highly personalized learning environment, and expanded learner control of the time, place, and pace of instruction. To those three points I added two more: the integration of meta-learning skills, and the breakdown of the barrier between formal learning and lifelong learning. I believe both are going to be important, the former because of the decreasing half-life of knowledge, the latter because of the ubiquity of technology.

Because the original author wasn’t present, I was paired for discussion with another author who shares my passion for engaging learning, and that was the topic of our discussion table.  The format was fun; we were distributed in pairs around tables, and attendees chose where to sit. We had an eager group who were interested in games, and my colleague and I took turns answering and commenting on each other’s comments. It was a nice combination. We talked about the processes for design, selling the concept, and more.

For the other participation session, the book was a series of monographs on important topics.  The discussion covered a subset of four: MOOCs, Social Media, Open Resources, and mLearning. I had written the mLearning chapter.  The chapter format included ‘take home’ lessons, and the editor wanted our presentations to focus on these. I posited the mindshifts necessary to take advantage of mlearning, in five basic principles:

  1. mlearning is not just mobile elearning; mlearning is a wide variety of things.
  2. the focus should be on augmenting us, whether our formal learning, or via performance support, social, etc.
  3. apply the Least Assistance Principle, focusing on the core stuff given the limited interface.
  4. leverage context, take advantage of the sensors and situation to minimize content and maximize opportunity.
  5. recognize that mobile is a platform, not a tactic or an app; once you ‘go mobile’, folks will want more.

The sessions were fun, and the feedback was valuable.

15 November 2017

#AECT17 Reflections

Clark @ 8:10 AM

Ok, so I was an academic for a brief and remarkably good period of time (a long time ago). Mind you, I’ve kept my hand in: reviewing journal and conference submissions, writing the occasional book chapter, contributing to some research, even playing a small role in some grant-funded projects. I like academia, it’s just that circumstances took me away (and I like consulting too; different, not one better). However, there’re a lot of benefits from being engaged, particularly keeping up with the state of the art, or at least one perspective on it. Hence, I attended the most recent meeting of the Association for Educational Communications & Technology, pretty much the society for academics in instructional technology.

The event features many of your typical components: keynotes, sessions, receptions, and the interstitial social connections. One of the differences is that there’s no vendor exhibition. And there are a lot of concurrent sessions: roughly 27 per time slot!   Now, you have to understand, there are multiple agendas, including giving students and new faculty members opportunities for presentations and feedback. There are also sessions designed for tapping into the wisdom of the elders, and working sessions to progress understandings. This was only my second AECT, so I may have the overall tenor wrong.  Regardless, here are some reflections from the event:

For one, it’s clear that there’s an overall awareness of what could, and should, be happening in education. In the keynotes, the speakers repeatedly conveyed messages about effective learning. What wasn’t effectively addressed was the comprehensive resistance of the education system to meaningful change.  Still, all three keynotes, Driscoll, Cabrera, and Reeves, commented in one way or another on problems and opportunities in education. Given that many of the faculty members come from Departments of Education, this is understandable.

Another theme that repeatedly emerged (at least for me) was the need for meaningful research. In a separate session, Tom Reeves expressed the need for a new approach to research, grounded in focusing on real problems. I’ve been a fan of his call for Design-Based Research, and liked what he said: all thesis students should introduce their topics with the statement “the problem I’m looking at is”. The sessions, however, seemed to include too many small studies. (In my most cynical moments, I wonder how many studies have looked at teaching students or teacher professional development and their reflections on/use of technology…)

One session I attended was quite exciting. The topic was the use of neuroscience in learning, and the panelists were all people using scans and other neuroscience data to inform learning design. While I generally deride the hype that usually accompanies the topic, here were real researchers talking about actual data and its implications, e.g. for dyslexia.  While most research results with implications for design are still at the cognitive level, it’s important to continue to push the boundaries.

I focused my attendance mostly on the Organizational Training & Performance group, and heard a couple of good talks.  One was a nice survey of mentoring, looking across the research, and identifying what results there were, and where there were still opportunities for research. Another study did a nice job of synthesizing models for human performance technology, though the subsequent validation approach concerned me.

I did a couple of presentations myself that I’ll summarize in tomorrow’s post. The challenges here are different than in corporate learning technology, but there are interesting outcomes worth tracking.  A valuable experience.

10 November 2017

Tom Reeves AECT Keynote Mindmap

Clark @ 7:11 AM

Thomas Reeves opened the third day of the AECT conference with an engaging keynote that used the value of conation to drive the argument for Authentic Learning. Conation is the component of cognition that consists of your intent to learn, and is under-considered. Authentic learning is very much collaborative problem-solving. He used the challenges from robots/AI to motivate the argument.

Mindmap

9 November 2017

Derek Cabrera AECT Keynote Mindmap

Clark @ 7:25 AM

Derek Cabrera opened the second day of the AECT conference with an insightful talk about systems thinking and the implications for education. With humorous examples he covered the elements of systems thinking and why it means we need to switch pedagogies to a constructivist approach.

Mindmap
