Learnlets

Clark Quinn’s Learnings about Learning

What about books | conferences?

18 May 2021 by Clark Leave a Comment

Responding to a frequent question yet again, I decided to post an answer to the “what about books | conferences?” question.

And, as usual, the transcript.


Once again, after talking about how learning requires meaningful practice, I was asked the seemingly timeless question: “but what about books?” Similarly, I regularly get “what about conferences?” So, for the record, let me say when and why books and lectures make sense. And when not. Hopefully I won't have to answer another “what about books | conferences” question.

To start, learning is action and reflection. That is, learning ‘outside' formal instruction. We act in the world and reflect on it to cement the lesson. It's slightly more complicated, because certain things, e.g. Geary's biologically primary things, may not really need reflection. Further, some things may be really challenging to learn on your own even with reflection. But basically, doing things and reflecting (which can be reading, experimenting, writing/representing, etc.) is the way we learn on our own.

Which, as I've argued before, suggests that instruction should be designed action and guided reflection. That is, instructors should be choosing meaningful activities and scaffolding reflection around them. When we're designing for novices, in particular, when the learner doesn't know what's important or why, we need to do the whole enchilada (darn, now I'm hungry).

Which also means that when we've segued beyond novice to practitioner (and beyond), we begin to know what's important and why, and we just need the pieces we're missing. We want resources that can fill in the gaps. We want support for reflection.

So now we can explain why we attend conferences, read books and articles, and the like. When we're deeply engaged in something, whether work or a passion, reading a book, listening to someone tell their story, and the like serve as the necessary adjunct to our activity! They provide the complement to our own endeavors; the reflection to our action!

Now, hopefully, we'll never again need to discuss this. Realistically, we can point people here when we get the “what about books | conferences?” question. At least, that's my story; what's yours?

A message to CxOs 2: about org learning myths

11 May 2021 by Clark 2 Comments

When I wrote my last post on a message to CxOs about L&D myths, I got some pushback. Which, for the record, is a good thing; one of us will learn something. The counter to my claim that L&D often is its own worst enemy was that there are folks in L&D who get it, but fight upward against wrong beliefs. Which absolutely is true as well. So, let's also talk about what CxOs need to know about the org learning myths they may believe.

First, however, I do want to say that there is evidence that L&D isn't doing as well as it could and should. This comes from a variety of sources. However, the question is where the blame lies. My previous post talked about how L&D deludes itself, but there are also reasons to believe unfair expectations play a part. So here's the other side.

  1. If it looks like schooling… I used this same one against L&D, but it's also the case that CxOs may believe this. Further, they could be happy if that's the case. Which would be a shame, just as I pointed out in the other case. Lectures, information dump and knowledge test, and content presentation in general don't lead to meaningful change in behavior in the absence of activity. Designed action and guided reflection, which looks a lot more like a lab or studio than a classroom, is what we want.
  2. SMEs know what needs to be learned. Research tells us to the contrary; experts don't have conscious access to around 70% of what they do (tho' they do have access to what they know). Just accepting what a SME says and making content around that is likely to lead to a content dump and lack of behavior change. Instead, trust (and ensure) that your designers know more about learning than the SME, and have practices to help ameliorate the problem.
  3. The only thing that matters is keeping costs low. This might seem to be the case, but it reflects a view that org learning is a necessary evil, not an investment. If we're facing increasing change, as the pundits would have it, we need to adapt. That means reskilling. And effective reskilling isn't about the cheapest approach, but the most effective for the money. Lots of things done in the name of learning (see above) are a waste of time and money. Look for impact first.
  4. Courses are the answer to performance issues. I was regaled with a tale about how sales folks and execs were insisting that customers wanted training, without evaluating that claim. I'll state a different claim: customers want solutions. If it's persistent skills, yes, training's the answer. However, a client found that customers were much happier with how-to videos than training for most of the situations. It's a much more complex story.
  5. Learning stops at the classroom. This is another myth. One of the reasons Charles Jennings was touting 70:20:10 was not because of the numbers, but because it was a way to get execs to realize that only the bare beginning came from courses, if at all. There's ongoing coaching with stretch assignments and feedback, and interacting with other practitioners…don't assume a course solves a problem. A colleague mentioned how her org realized that it couldn't create a course without also creating manager training; otherwise they'd undermine the outcomes instead of reinforcing them.
  6. We've invested in an LMS, that's all we need. That's what the LMS vendors want you to believe ;)! Seriously, if all you're doing is courses, this could be true, but I'm hoping the above makes clear that courses alone aren't the answer.
  7. Customers want training. Back to an earlier statement: customers want solutions. It is cool to go away to training and get smothered in good food and perks. However, it's also known that sometimes that goes to the manager, not the person who'll actually be doing the work! Also, training can't solve certain types of problems. There are many types of problems customers encounter, and they have different types of solutions. Videos may be better for things that occur infrequently, onboard help or job aids may meet other needs too unusual to be able to predict for training, etc. We don't just want to make customers happy, we want to make them successful!
  8. We need ways to categorize people. It's a natural human thing to categorize, including people. So if someone creates an appealing categorization that promises utility, hey, that sounds like a good investment. Except there are many problems! People aren't easy to categorize, instruments struggle to be reliable, and vested interests will prey upon the unwary. Anyone can create a categorization scheme, but validating it, and having it be useful, are both surprisingly big hurdles. Asking people questions about their behavior tends to be flawed for complex reasons. Using such tools for important decisions like hiring and tracking has proven to be unethical. Caveat emptor.
  9. Bandwagons are made to be jumped on. Face it, we're always looking for new and better solutions. When someone links some new research to a better outcome, it's exciting. There's a problem, however. We often fall prey to arguments that appear to be new, but really aren't. For instance, all the ‘neuro' stuff unpacks to some pretty ordinary predictions we've had for yonks. Further, there are real benefits to machine learning and even artificial intelligence. Yet there's also a lot of smoke to complement the sizzle. Don't get misled. Do a skeptical analysis. This holds doubly true for technology objects. It's like a cargo cult: whatever has come down the pike must be a new gift from those magic technologists! Yet this is really just another bandwagon. Sure, Augmented Reality and Virtual Reality have some real potential. They're also being way overused. This is predictable (cf. PowerPoint presentations in Second Life), but ideally is avoided. Instead, find the key affordances – what the technology uniquely provides – and match the capability to the need. Again, be skeptical.

My point here is that there can be misconceptions about learning within L&D, but it can also be outside perspectives that are flawed. So hopefully, I've now addressed both. I don't claim that this is a necessary and complete set, just certain things that are worth noting. These are org learning myths that are worth trying to overcome, or so I think. I welcome your thoughts!

Evaluating soft skills

27 April 2021 by Clark 3 Comments

As has become a pattern, someone recently asked me how to evaluate soft skills. And without being an expert on soft skills or evaluation, I tried to answer on principle. So I thought about the types of observable data you should expect to find. And that yielded an initial answer. Then I watched an interesting video of a lecture by a scholar and consultant, and it elaborated the challenges. So there's a longer answer too. Here's an extended riff on evaluating soft skills.

I started with wondering what performance outcomes you would expect for soft skills. Coupled, as well, with how you could find evidence of these observable differences. As a short answer, I suggested that there should be 3(+) outcomes from effective soft skills training.

0) The learner should be able to perform in soft skills scenarios (cf. Will Thalheimer's LTEM). This is the most obvious. Put them in the situation and ask them to perform. This is the bit that gets re-addressed further down.

1) The learner should be aware of an improvement in their ability to perform. However, asking immediately can lead to a misapprehension of ability. So, as Will Thalheimer advises in his Performance-Focused Smile Sheets, ask them 3 months later. Also, ask about behavior, not knowledge. E.g. “Are you using the <> model in your work, and do you notice an improvement in your ability?”

2) The ‘customers' of the learner should notice the improvement. Depending on whether that's internal or external, it might show up (at least in aggregate) in either 360 eval scores, or some observable metric like customer sat scores. It may be harder to collect this data, but of course it's also more valuable.

3) Finally, their supervisors/managers should notice the improvement, whether observationally or empirically. They should be not only prepared to support the change over time, but asked to look for evidence (including as a basis to fine-tune performance).

Altogether, triangulating on these signals should be a way to establish validity.
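
As a rough illustration of that triangulation, here's a minimal sketch of combining the signals; the scales, weights, and field names are my assumptions, not a validated instrument:

```python
# Illustrative sketch of triangulating soft-skills evaluation signals.
# All field names, scales, and weights are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SkillEvidence:
    scenario_score: float      # 0-1: performance in a soft-skills scenario (LTEM-style)
    self_report_delta: float   # 0-1: learner-reported improvement at ~3 months
    customer_delta: float      # 0-1: change in 360 / customer-sat scores (aggregate)
    manager_observed: bool     # supervisor noticed the improvement

def triangulated_signal(e: SkillEvidence) -> float:
    """Combine the independent signals into a single rough indicator.

    Agreement across sources matters more than any single number,
    so the externally observed measures are weighted most heavily.
    """
    base = 0.3 * e.scenario_score + 0.2 * e.self_report_delta + 0.4 * e.customer_delta
    return base + (0.1 if e.manager_observed else 0.0)

print(triangulated_signal(SkillEvidence(0.8, 0.7, 0.6, True)))  # -> 0.72
```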

Now, extending this, Guy Wallace tweeted a link to a lecture by Neil Rackham. In it, Neil makes the case that universities need to change to teaching core skills, in particular the 4 C's: critical thinking, creativity, communication, and collaboration. He also points out how hard it is to evaluate these without the labor-intensive effort of an individual observing performance. This is a point that others have made: these skills have hard-to-observe criteria.

There's some argument about so-called 21C skills, and yet I can agree that these four things would be good. The question is how to assess them reliably. Rackham argues that perhaps AI can help here. Perhaps, but at this point I'd argue for two things. First, help students self-evaluate (which has the benefit of them understanding what's involved). Second, instrument the environments (say, for instance, with xAPI) in which these activities are performed. There will be data records that can be matched to behaviors, initially for human evaluation, but perhaps ultimately for machine evaluation.
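
To make that concrete, a single xAPI statement is just an actor-verb-object record. Here's a minimal sketch of one that might be logged from a group activity; the learner details and activity IDs are placeholder assumptions:

```python
# A minimal, valid-shaped xAPI statement recording a collaboration event.
# The actor details and activity IDs are placeholder assumptions.
import json

statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/commented",
        "display": {"en-US": "commented"},
    },
    "object": {
        "id": "https://example.com/activities/group-project-1",
        "definition": {"name": {"en-US": "Group project discussion"}},
    },
}

# In practice you'd POST this (with auth and the X-Experience-API-Version
# header) to your LRS's /statements endpoint, then mine the accumulated
# records for evidence of the 4 C's.
print(json.dumps(statement, indent=2))
```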

Of course, this requires assigning meaningful activities that necessarily involve creativity, critical thinking, communication, and/or collaboration. This means project-based work, and I've long argued that you can't learn such skills without a domain. Actually, to create transferable versions, you'd need to develop the skills across domains.

When I teach, I prefer to give group work projects that do require these skills. It was, indeed, hard to mark these extra skills, but I found that scaffolding them (e.g. a ‘how to collaborate' document) facilitated good outcomes. Being explicit about the best thinking practices isn't only a good idea, it's a demonstrably useful approach in general.

So I think developing these skills is important. That means we need a means of evaluating soft skills. We know it when we see it, but it's hard to find the opportunity; if we can assign it, we can evaluate and develop these skills more readily. That, I think, is a desirable goal. What think you?

Deep learning and expertise

20 April 2021 by Clark 3 Comments

A colleague asked “is anyone talking about how deep learning requires time, attention, and focus?” He was concerned with “the trend that tells us everything must be short.” He asked if I'd written anything, and I realized I really haven't. Well, I did make a call for “slow learning” once upon a time, but it's probably worth doing it again. So here's a riff on deep learning and expertise.

First, what do we mean by deep learning? Here, I'm suggesting that the goal of deep learning is expertise. We've automated enough of the component elements that we can use our conscious processes to make expert judgments in addressing performance requirements. This could be following a process, making strategic decisions such as diagnoses and prescriptions, and more. It can also require developing pre-conscious responses, such as when we train airline pilots to respond to emergencies.

Now, these responses can vary in their degree of transfer. Making decisions about how to remedy a piece of machinery that's misbehaving is different than deciding how to prioritize the new product improvements. The former is more specific, the latter is more generic. Yet there are certain things that are relevant to both.

Another issue is how often the skill needs to be performed. You can develop expertise much more quickly with lots of opportunities to apply the knowledge. It's more challenging when there aren't as many times it's relevant in the course of your workflow. The aforementioned pilots are training for situations they hope never to see!

Before we get there, however, there's one other issue to address: how much has to go in the head, and how much can be in the world? In general, getting information in the head is hard (if we're doing it right), and we should try to avoid it when possible. I argue for backwards design, starting with what the performance looks like if we've focused on IA (intelligence augmentation), that is, looking for the ideal combination of smarts between technology (loosely defined) and our heads. As Joe Harless famously said, “Inside every fat course there's a thin job aid crying to get out.”

Once we've determined that we need human expertise, we also need to acknowledge that it takes time! I put it this way: the strengthening of connections (what learning is at the neural level) can only be done so much in any one day before the strengthening function fatigues; you literally need sleep before you can learn more, and only so much strengthening can happen in that one day. So developing strong connections, e.g. strong enough that they'll be triggered appropriately, is going to have to be spaced out over time.
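
To illustrate the spacing point, here's a toy sketch of an expanding practice schedule; the starting gap and doubling rate are arbitrary assumptions for illustration, not a validated model of memory:

```python
# A toy expanding-interval scheduler, illustrating why deep learning
# has to be spread over time. The first gap and growth factor are
# arbitrary assumptions, not a validated memory model.
from datetime import date, timedelta

def practice_schedule(start: date, sessions: int,
                      first_gap_days: int = 1, growth: float = 2.0):
    """Yield session dates with gaps that roughly double each time."""
    when, gap = start, float(first_gap_days)
    for _ in range(sessions):
        yield when
        when += timedelta(days=round(gap))
        gap *= growth  # strengthening consolidates between sessions (sleep!)

for d in practice_schedule(date(2021, 4, 20), 6):
    print(d)  # 2021-04-20, 04-21, 04-23, 04-27, 05-05, 05-21
```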

This does depend on the pre-existing knowledge of the learner, but it was Anders Ericsson who posited the approximately 10K hours of practice to achieve expertise. That's both not quite accurate and not quite what he said, but as a rule of thumb it may be helpful. The important thing is that not just any practice will work. It takes what he called ‘deliberate practice', that is, the right next thing for this learner. Continued over time, as the learner's ability increases, new practice foci are necessary.

All that can't come from a course (no one is going to sit through 10,000 hours!). Instead, if we follow the intent of the 70:20:10 framework, it's going to take some initial courses, then coaching, with stretch assignments and feedback, and joining a relevant community of practice, and….

We also can't assume that our learners will develop this as efficiently as possible. Unless we've trained them to be good self-learners, it will take guided learning across their experience. Even if it's only at a particular point: most people who are pursuing a sport, hobby, or what have you eventually will take a course to get past their own limitations and accelerate development.

The short answer is that deep expertise doesn't, can't, come from a short learning experience. It comes from an extended learning experience, with spaced, deliberate, and varied practice with feedback. If you want expertise, know what it takes and do it. That's true whether you're doing it for yourself or you're in charge of it for others. Deep learning and expertise come with hard work. (Also, let's make that ‘hard fun' ;).

Andragogy vs Pedagogy

13 April 2021 by Clark 24 Comments

Asked about why I used the word pedagogy instead of andragogy, I think it's worth elaborating (since I already had in my reply ;) and sharing. In short, I think it's a false dichotomy. So here's my analysis of andragogy vs pedagogy.

Looking at Knowles' andragogy, I think it's misconstrued. What he talks about for adults is really true for all learners, taking into account their relative cognitive capability and amount of experience. So I fear that using andragogy will perpetuate the myth that pedagogy is a different learning approach (and keep kids in classrooms listening to lectures and answering rote questions). Empirically, direct instruction works (tho' its interpretation is different than the name might imply; I once pointed out how it and constructivism, properly construed, both really say the same thing ;).

There was an article that posited five differences, and I see a major confound; the article's talking about andragogy as self-directed learning, and pedagogy as formal instruction. That's apples and oranges. It really is more about whether you're at a novice or practitioner level, and the role of instruction. Age is an arbitrary element here, not a defining factor. Addressing each point:

1. Adults are self-directing learners. No; in things they know they need, they can be, but they may also have their bosses or coaches pointing them to courses. Plus, for areas where the adults are novices, they still need guided instruction. Also, owing to our bad K12 and higher ed, we're not really enabling learners to be effective and efficient self-directed learners. Further, kids are self-directed about things they're interested in. But we make little effort to ground what we do (particularly K6) in any reason why it's on the syllabus.

2. The role of learner experience. Yes, this matters, but it's a continuum. Also, you always want to base instruction on learner experience, because elaboration requires connecting to and building on existing knowledge. Yes, we do tend to give kids abstract problems (particularly in math), which is contrary to good learning science. “Only two things wrong in education these days, the curriculum and the pedagogy, other than that we're fine.” Ahem. We teach the wrong things, badly.

3. Adults generate interest in useful information. So does everyone, but that's not a matter of developmental level. Kids also prefer stuff that's relevant. We've developed a curriculum for kids that is out of date, and we don't motivate it. Everyone has a curriculum, and there are degrees of self-direction, but it's not a binary division.

4. Adult readiness to learn is triggered by relevance (yeah, kind of redundant). Kids also learn better when there's a reason. Hence problem-based, service-based, and other such philosophies of learning. Even direct instruction posits meaningful problems. Again, the article's comparing an ideal human learning model to a broken school model.

5. What motivates learners are real-life outcomes. Really, we've covered this: everyone learns better when there's motivation. Children learn for grades because no one's made it meaningful for them to care! Kids will pursue their learning when it makes sense to them. John Taylor Gatto made the case that kids could learn the entire K6 curriculum in 100 hours if they cared! Kids do learn outside of what's forced on them by schooling, be it Pokemon, polka, or porcupines.

Thus, in the comparison between andragogy vs pedagogy, I come down on the side of pedagogy. It's the earlier term, and while ped does mean ‘kid', I still think it's really about learning design. Learning design should be aligned to our brains, not differentiated between child and adult. Yes, there are developmental differences, but they're a continuum and it's more a matter of capacity; it's not a binary distinction. That's my take, what's yours?

Levels of LXD Design

6 April 2021 by Clark Leave a Comment

I stumbled across the Elements of UX diagram again, and happened to wonder if it would map to LXD. Here’s my stab:

And the text, as usual.


In a justifiably well-known image (PDF), Jesse James Garrett (JJG) detailed the elements of (web) user experience. I've been involved in the parallel development of UX and ID (and cross-fertilized them), so I wondered what the LXD version would be. So, of course, I took a stab at the levels of LXD design.

To start with, JJG's diagram works from the bottom up. The five levels, in order, are:

  1. The original objectives and user needs.
  2. That leads to content requirements and/or functional specifications.  
  3. The next level is an information architecture or interface design that is structured to meet those needs.  
  4. Those semantic structures are then rendered as an information design with navigation or interface design.
  5. The top level is the visual design, what the user actually sees or experiences.

This systematic breakdown has been well recognized as a useful development framework. The progression from need to semantics to implementation syntax suggests a logical development flow. As an aside, no one's claiming we should develop in a linear manner, and there tends to be more up-and-down action in actual practice. Drilling down and then working back from the bottom up is a well-known cycle of design!

The learning equivalent, then, should similarly have a structured flow. We want to go from our needs, through various levels of representation, until we reach the learner experience.  

Given that we should be driven not by the goals for the interface but by learner needs, I'll suggest we start with the performance objectives. Then, in parallel with user needs, I'll stipulate that the other top-level definition comes from the learner characteristics. These match the initial level stipulated.

At the next level, I'll suggest that the performance objectives drive assessment specifications, and the other decision at this level is the pedagogical approach. We need to know what learners need to be able to do, and how we'll get them there.

As an intermediate representation, equivalent to UX's information architecture or interface design, I suggest that from the assessment we determine the necessary practice activities, and these are coupled with the necessary content requirements: models and examples, as well as the introduction and closing. Here we're still at what's required, not how it manifests.

The next level is where we start getting concrete. We need to pick an overall theme or look-and-feel, and the flow of the experience. We'll also, of course, need to make a consistent interface to support navigation and taking action. We know what we need to have, but we haven't actually rendered it yet.

Finally, we must render the necessary media. This will be the videos, audio, text, diagrams, images, and more that comprise the experience. This includes the actions to be taken and the associated consequences of each choice.
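
Summarizing the proposed mapping as a simple bottom-up structure (the level names are my shorthand here, not an established standard):

```python
# The proposed LXD levels as a simple ordered structure, bottom-up,
# mirroring JJG's elements of UX. Names are shorthand, not a standard.
LXD_LEVELS = [
    ("1. Needs", ["performance objectives", "learner characteristics"]),
    ("2. Specifications", ["assessment specifications", "pedagogical approach"]),
    ("3. Structure", ["practice activities",
                      "content requirements: models, examples, intro, closing"]),
    ("4. Skeleton", ["theme / look-and-feel", "experience flow",
                     "navigation and interface"]),
    ("5. Surface", ["rendered media: video, audio, text, diagrams, images,"
                    " actions and consequences"]),
]

for level, elements in LXD_LEVELS:
    print(level, "->", "; ".join(elements))
```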

That's the equivalent structure I'm suggesting for the different levels of LXD design. Of course, this is a thought exercise, and so I may well have made some interpretations you could disagree with. For instance, I may have slavishly followed JJG's levels too closely. Let me know! Also, it's not clear whether this is a useful representation; so far it's sort of a ‘because it's there' effort ;). You can let me know your thoughts on that, too!

Performance Support and Bad Design

30 March 2021 by Clark Leave a Comment

Here’s a story about where performance support would’ve made a task much easier.

And, as always, the text.


The other day, I had a classic need for performance support. Of course, it didn't exist. So here's a cognitive story about when and where a job aid would help.

Our Bosch dishwasher stopped near the beginning of the cycle, and displayed an icon of a water tap. The goal was to get the dishwasher running again. What with the layer of undrained water, we figured there was some sort of problem with the drain: clogged, or the pump broken. M'lady had cleaned the drain, but the icon persisted. What now? Of course we could call a service person, but trying to be handy and frugal (and safe), we wanted to find out if it was something I could deal with. So, off to the manual.

Well, in this case, since I didn't know where the manual was, I went online. I accessed the site and downloaded the manual. Only to find no guide to what the icons mean. What?!? This violates what we know about our brains, in this case that our memory is limited. The support section of the site did list the error codes, but numerically, not by icon. So I had an indication I couldn't map to a problem, let alone a solution.

This is a real flaw! If you're gonna use icons, provide a guide! Don't assume they're interpretable. (This had happened once before with this same appliance, with an impenetrable icon and no clue.) As a result, I had to call the service line. That wait took a while (with more people staying home, they're using their dishwashers more, and the appliances are therefore breaking down more). Once, the call dropped. The second time, I had to stop because I had an upcoming call. The third time, however, I got through.

And a perfectly nice person listened, asked some questions, and then instructed me through a process. After hitting cancel (which automatically tries to drain everything and reset to zero) by simultaneously pressing two buttons linked by a line on the control panel, I heard noises in the sink like it was draining. After a minute, I was told to go ahead and open it up (yep, drained), turn it off and on, and then try running the cleaning cycle again. And, voila, it worked! (Yay!)

So, what's wrong with this picture? First of all, there should be a clear explanation of what the icon means, as indicated above. Second, it should be clearly tied to a process to address the problem, including intermediate steps. This is so common that I am quite boggled that the great engineers who made our (very good) dishwasher aren't complemented with a great technical communications team who write up a useful manual for support. It. Is. Just. Silly!

Note: this isn't a learning experience. It's just fine that I don't recall what the last time's icon was or what it meant, and maybe even what this icon meant and what I should do. It should be infrequent enough that it'd be unreasonable for me to have to recall it. Instead, I should be able to look it up. Put information in the world! In the long term, this should save them buckets of money, because most people could self-help. Clearly, they've gone to numeric codes, but they could've just added in the associated icons, or given a mapping from icon to numeric code. Something to help folks who have the icons.
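
For illustration, the kind of icon-to-code-to-remedy mapping I'm wishing for could be as simple as this sketch; the code, meaning, and steps are invented placeholders, not Bosch's actual documentation:

```python
# "Put information in the world": icon -> error code -> meaning and
# first remedial steps. All entries are invented placeholders.
FAULT_GUIDE = {
    "water_tap_icon": {
        "code": "E24",  # hypothetical numeric code
        "meaning": "Drainage problem detected",
        "steps": [
            "Clean the filter and drain area",
            "Hold the cancel/reset button pair to force a drain",
            "Power off and on, then rerun the cycle",
        ],
    },
}

def lookup(icon: str) -> None:
    entry = FAULT_GUIDE.get(icon)
    if entry is None:
        print("Unknown icon - consult the error-code table or service line")
        return
    print(f"{entry['code']}: {entry['meaning']}")
    for step in entry["steps"]:
        print(" -", step)

lookup("water_tap_icon")
```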

This is just bad design, and it's so obvious how to ameliorate it. People will self-help many times, but only if they can! Just as you shouldn't be creating a training course when a job aid will do, you can save a help call when a job aid can address most of the problems. Use performance support when it makes sense, and doing so comes from understanding how we actually think, work, and learn. When you do, you can design solutions that meet real needs. And that's what we want to do, no?

A bad question

18 March 2021 by Clark 2 Comments

On Twitter today was a question from an organization that, frankly, puzzled me. Further, I think it's important to understand why this was a bad question. So here let me unpack several illustrative problems.

First, the question asks “What kind of learning do you prefer?” My initial response is: why would you ask that? What learners prefer has little to do with what outcomes you need to achieve. We should design for the learning outcomes.

Then, there’s the list of elements:

  • Video-based learning
  • Article-based learning
  • How to guides
  • Interactive quizzes

There are several problems with this list. First, why this subset? This isn’t a full suite of alternatives. What about simulations, scenarios, or games? AR or VR? Podcasts? Why this selection?

Then, the options lack full definitions. What do they mean by ‘video-based learning'? Is it just a video, with no assessment? Is it really ‘learning' then? Of course, if the ‘-based' means assessment as well, how is that separate from ‘interactive quizzes'? Similarly for articles. What is included?

Yet guides and quizzes aren’t ‘-based’. Are we assuming they’re full learning solutions? That’s questionable. A how-to guide, aka performance support, might yield an outcome, but it doesn’t guarantee learning. There are lots of factors that would influence that. And interactive quizzes, without models and examples, would be a slow way to develop expertise.

Another problem is in the separation of the elements. So, for instance, a ‘how to' guide could be a video or an article! There's the YouTube video I used to fix my dryer, or the step-by-step instructions I used to figure out how to run cables on a monitor. Likewise, interactive quizzes could include video or point to an article. These aren't mutually exclusive categories.

The point is that this is a bad question. It's already been taken down (I wasn't the only one to question it!). Still, there're lessons to be learned. (Maybe the most important is to ensure your social media marketing person has enough knowledge of learning not to do such silly things, but I can't assume that's the locus of the problem. It's just a hypothesis I've seen play out elsewhere. ;) While there are times it makes sense to ask provocative questions, there's also a reason to have conceptual clarity. At least, that's my take, I welcome yours!

Animation thoughts

9 March 2021 by Clark 4 Comments

Sparked by a conversation, I generate some animation thoughts.

And, as always, a transcript.


In a conversation the other day, my colleague mentioned how she was making a practice of creating animations. I found this interesting, because while I think animations are important, I don't do them all that much (or so I thought). Particularly intriguing was the notion of what principles might guide animations, including when to use them. I was prompted to reflect, and so here are some animation thoughts.

First, let's be clear what I mean. I've argued that we don't use graphic novel/comic formats enough, and that likewise applies to cartoons. Which are also known as animations. Yet that's not really what I'm talking about. I think we could use them more, but that's another reflection.

Instead, here I'm talking about animated diagrams. And I think there are times when these are not just engaging, but cognitively important. Diagrams map conceptual relationships to spatial ones, and can add additional coding with color and shape. Animations add the dimension of time, so these relationships can change. In my categorization, these are dynamic diagrams, useful when the conceptual relationships change in important ways depending on other factors.

Interestingly, in the conversation it came up that one form of her animations was diagram builds. I use diagrams a lot, not only to communicate, but as a tool for my own understanding! And I'd done some builds, but after Will Thalheimer's Presentation Science course I realized I needed to do that more systematically (and now do so). Building diagrams is helpful. Cognitively, a diagram can be overwhelming if there are too many elements. By starting at one point, and gradually adding in other elements, you can prevent cognitive overload. And in a presentation, in particular, you want to highlight important points.

However, I also think that there are things worth showing how they work dynamically. Like how a content system would work, e.g. context and rules combining to pull content out by description. Or how coordinates change based upon trigonometric values. I haven't done much of this, for the simple reason that I don't have a good animation tool. And yes, I'm aware that you can do motion in PowerPoint and/or Keynote, but I haven't gotten into it. Time for a skill upgrade!

There are problems with animations, and guidelines. John Sweller's cognitive load work plays out in Richard Mayer's multimedia research (as captured in his book with Ruth Clark, e-Learning and the Science of Instruction), as indicated above. Thus, you shouldn't try to have people read text while watching visual dynamics (use audio). Also, you should help people focus attention by removing extraneous details and/or highlighting the appropriate focus.

The general principles of media apply as well. Accessibility suggests some alternate representations. Timing suggests having a pause ability for any animation longer than a certain time, and of course the ability to replay. Similarly, the animation design should use appropriate white space, highlighting, and other aspects that make it visually clear and appealing.

Overall, I'd suggest that there are times when animations are the best option for conveying dynamic conceptual information. To use them, however, you have to take into account our cognitive limitations. So, these are some of my animation thoughts. I welcome yours.

ID Support Thyself

2 March 2021 by Clark Leave a Comment

I want to dig a bit deeper into improving design processes. Here, I look at tools, asking IDs to ‘support thyself'.

As usual, the transcript:


One of the things I do is help organizations improve their design processes. Last week, I talked about when to team up in the process of learning design. Another component of good design, besides knowing when and how to draw in more minds, is baking learning science into your processes. That's where tools help. I expect that most orgs do have process support, but…baking in learning science seems not to be there. So here I'm exhorting IDs to ‘Support Thyself'.

As I discuss in my forthcoming book, there are nuances to each of the elements of learning design (as I also talked about for Learnnovators). That includes meaningful practice, useful models, motivating intros, and more. The question is how to ensure that, as you develop them, you address all the elements.

One approach, of course, is to use checklists. Atul Gawande has made the case for checklists in his The Checklist Manifesto. In this great book, he talks about his own inspiring efforts in the context of other high-risk/high-value endeavors such as flight and construction. There are clear benefits.

The point is that checklists externalize the important elements, supporting us in not forgetting them. It's easy, when you do yet another task, to think you've completed a component because you've done it so many times before. Yet this can lead to errors. So having an external framework is useful. That's part of the rationale behind the Serious eLearning Manifesto!

I had originally been thinking about templates, and that's another way. And here, I'm not talking about tarted-up quiz show templates. Instead, I mean a tool that leaves stubs for the important things that should be included. In examples, for instance, you could leave a placeholder for referencing the model, and for the underlying thinking. Really, these are checklists in another format. All in all, these are ways that you can Support Thyself!
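
As a tiny illustration, such a quality floor can even be automated; this sketch checks a design against a small, assumed subset of elements (not my full checklist, which comes up below):

```python
# A minimal design-checklist sketch: a floor of required elements
# without capping creativity. The element list is a small illustrative
# subset, assumed for this example.
REQUIRED_ELEMENTS = [
    "meaningful practice",
    "useful model",
    "example with underlying thinking",
    "motivating introduction",
    "reflection/closing",
]

def missing_elements(design: dict) -> list:
    """Return the required elements this design hasn't addressed yet."""
    return [e for e in REQUIRED_ELEMENTS if not design.get(e)]

draft = {"useful model": "supply-and-demand diagram", "meaningful practice": None}
print(missing_elements(draft))
# -> ['meaningful practice', 'example with underlying thinking',
#     'motivating introduction', 'reflection/closing']
```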

What you don't want to do is make it too constraining. You want to create a minimum floor of quality without enforcing a ceiling, at least other than the ones your own schedule and budget will impose. You want to be creative while also maintaining effectiveness.

And you can do this in your authoring tool. Just as you may have a template you reuse to maintain look and feel, you can have placeholders for the elements. You can also provide guidance for the elements, in a variety of ways.

There are lots of forms of performance support. And, just as we should be using them to assist our performers (even doing backwards design: designing the tools first, then any learning), we should be using them to overcome our own cognitive limitations. Our cognitive architecture is amazing, but it's prone to all sorts of limitations (there's no perfect answer). We can suffer from functional fixedness, set effects, confirmation bias, and more.

I'll admit that I created an ID checklist. The only problem was it had 178 elements, which might be unwieldy (though it did go through the whole process). But you should make sure that whatever tools you do have cover the necessary elements you need. I did create a more reasonable one to accompany my ‘Make it Meaningful' initiative (coming soon to a theater or drive-in near you).

Our brains have limitations that influence our ability to design. Fortunately, we can use technology as support to minimize the impact of those limitations and maximize the quality of our outcomes. And we should. Thus, my encouragement for IDs to Support Thyself!
