Learnlets

Clark Quinn’s Learnings about Learning

More Marketing Malarkey

10 August 2021 by Clark 2 Comments

As has become all too common, someone decided to point me to some posts for their organization. Apparently, interest was sparked by a previous post of mine where I’d complained about microlearning. While this one does a (slightly) better job talking about microlearning, it is riddled with other problems. So here’s yet another post about more marketing malarkey.

First, I don’t hate microlearning; there are legitimate reasons to keep content small. It can get rid of the bloat that comes from contentitis, for one. There are solid reasons to err on the side of performance support as well. Most importantly, perhaps, is also the benefit of spacing learning to increase the likelihood of it being available. The thing that concerns me is that all these things are different, and take different design approaches.

Others have gone beyond just the two types I mention. One of the posts cited a colleague’s more nuanced presentation about small content, pointing out four different ways to use microlearning (though interestingly, five were cited in the referenced presentation). My problem, in this case, wasn’t the push for microlearning (there were some meaningful distinctions, though no actual mention of how they require different design). Instead, it was the presence of myths.

One of the two posts opened with this statement: “The appetite of our employees is not the same therefore, we must not provide them the same bland food (for thought).” This seems a bit of a mashup. Our employees aren’t the same, so they need different things? That’s personalization, no? However, the conversation goes on to say: “It’s time to put together an appetizing platter and create learning opportunities that are useful and valuable.” Which seems to argue for engagement. Thus, it seems like it’s instead arguing that people need more engaging content. Yes, that’s true too. But what’s that got to do with our employees not having the same appetite? It seems to be swinging towards the digital native myth, that employees now need more engaging things.

This is bolstered by a later quote: “When training becomes overwhelming and creates stress, a bite-sized approach will encourage learning.” If training becomes overwhelming and stressful, it does suggest a redesign. However, my inclination would be to suggest that ramping up the WIIFM and engagement is the solution. A bite-sized approach, by itself, isn’t a solution to engagement. Small wrong or dull content isn’t a solution for dull or wrong content.

This gets worse in the other post. There were two things wrong here. The first one is pretty blatant:

There are numerous resources that suggest our attention spans are shrinking. Some might even claim we now have an average attention span of only 8 seconds, which equals that of a goldfish.

There are, of course, no such resources pointed to. Also, the resources that proposed this have been debunked. This is actually the ‘cover story’ myth of my recent book on myths! In it, I point out that the myth about attention span came from a misinterpreted study, and that our cognitive architecture doesn’t change that fast. (With citations.) Using this ‘mythtake’ to justify microlearning is just wrong. We’re segueing into tawdry marketing malarkey here.

This isn’t the only problem with this post, however. A second one emerges when there’s an (unjustified) claim that learning should have 3E’s: Entertaining, Enlightening, and Engaging. I do agree with Engaging (per the title of my first book); however, there’s a problem with it, and with the other ones. So, for Entertaining, this is the follow-up: “advocates the concept of learning through a sequence of smaller, focused modules.” Why is smaller inherently more entertaining? Also, in general, learning doesn’t work as well when it’s just ‘fun’, unless it’s “hard fun”.

Enlightening isn’t any better. I do believe learning should be enlightening, although particularly for organizational learning it should be transformative in terms of enhancing an individual’s ability to perform. Just being enlightened doesn’t guarantee that. The follow-up says: “Repetition, practice, and reinforcement can increase knowledge.” Er, yes, but that’s just good design. There’s nothing unique to microlearning about that.

Most importantly, the definition for Engaging is “A program journey can be spaced enough that combats forgetting curve.” That is spacing! Which isn’t a bad thing (see above), but not your typical interpretation of engaging. This is really confused!

Further, I didn’t even need to fully parse these two posts. Even on a superficial examination, they fail the ‘sniff test’. In general, you should be avoiding folks that toss around this sort of fluffy biz buzz, but even more so when they totally confound a reasonable interpretation of these concepts. This is just more marketing malarkey. Caveat emptor.

(Vendors, please please please stop with the under-informed marketing, and present helpful posts. Our industry is already suffering from too many myths. There’s possibly a short-term benefit; however, the trend seems to be that people are paying more attention to learning science. Thus, in the long run I reckon it undermines your credibility. While taking them down is fun and hopefully educational, I’d rather be writing about new opportunities, not remedying the old. If you don’t have enough learning science expertise to do so, I can help: books, workshops, and/or writing and editing services.)

 

Concept Maps and Learning

3 August 2021 by Clark 1 Comment

Once again, someone notified me of something they wanted me to look at. In this case, a suite of concept maps, with a claim that this could be the future of education. And while I’m a fan of concept maps, I was suspicious of the claim. So, while I’ve written on mindmaps before, it’s time to dig into concept maps and learning.

To start, the main separation between mindmaps and concept maps is labels. Specifically, concept maps have labels that indicate the meaning of connections between concepts. At least, that’s my distinction. So while I’ve done (a lot of) mindmaps of keynotes, they’re mostly of use to those who also saw the same presentation. Otherwise, the terms and connections don’t necessarily make sense. (Which doesn’t mean a suite of connections can’t be valuable, cf. Jerry’s Brain, where Jerry Michalski has been tracking his explorations for over two decades!) However, a concept map does a better job of indicating the total knowledge representation.

I know a wee bit about this, because while writing up my dissertation, I had a part-time job working with Professor Kathy Fisher and SemNet. Kathy Fisher is a biologist and teacher who worked with Joe Novak (who can be considered the originator of concept mapping). SemNet is a Macintosh concept mapping tool (Semantic Network) that Kathy created and used in teaching biology. It allows students to represent their understanding, which instructors can use to diagnose misconceptions.

I also later volunteered for a while with the K-Web project. This was a project with James Burke (of Connections fame) creating maps of the interesting historical linkages his show and books documented. Here again, navigating linkages can be used for educational purposes.

With this background, I looked at this project. The underlying notion is to create a comprehensive suite of multimedia mindmaps of history and the humanities. This, to me, isn’t a bad thing! It provides a navigable knowledge resource that could be a valuable adjunct to teaching. Students can be given tasks to find the relationships between two things, or asked to extend the concept maps, or… Several things, however, are curious at least.

The project claims to be a key to the future of global education. However, as an educational innovation, the intended pedagogical design is worrisome. The approach claims that “They have complete freedom to focus on and develop whichever interests capture their fancy.” and “…the class is exposed to a large range of topics that together provide a comprehensive and lively view of the subject…” This is problematic for two reasons. First, there appears to be no guarantee that this indeed will provide comprehensive coverage. It’s possible, but not likely.

As a personal example, when I was in high school, our school district decided that the American Civil War would be taught as modules. Teachers chose to offer whatever facets they wanted, and students could take any two modules they wanted. Let me assure you that my knowledge of the Civil War did not include a systematic view of the causes, occurrences, and outcomes, even in ideologically distorted versions. Anything I now know about the Civil War comes from my own curiosity.

Even with the social sharing, a valuable component, there appears to be no guidance to ensure that all topics are covered. Fun, yes. Curricularly thorough, no.

Second, presenting on content doesn’t necessarily mean you’ve truly comprehended it. As my late friend, historian Joseph Cotter, once told me, history isn’t about learning facts, it’s about learning to think like a historian. You may need the cultural literacy first, but then you need to be able to use those elements to make comparisons, criticisms, and more. Students should be able to think with these facts.

Another concerning issue in the presentation about this initiative is this claim: “reading long passages of text no longer works very well for the present generation of learners. More than ever, learners are visual learner [sic].” This confounds two myths, the digital native myth with the learning styles myth. Both have been investigated and found to be lacking in empirical support. No one likes to read long passages of text without some intrinsic interest (but we can do that).

In short, while I laud the collection, the surrounding discussion is flawed. Once again, there’s a lack of awareness of learning science being applied. While that’s understandable, it’s not sufficient.  My $0.05.

My ‘Man on the Moon’ Project

20 July 2021 by Clark 8 Comments

There have been a variety of proposals for the next ‘man on the moon’ project since JFK first inspired us. This includes going to Mars, infrastructure revitalization, and more. And I’m sympathetic to them. I’d like us to commit to manufacturing and installing solar panels over all parking lots, both to stimulate jobs and the economy, and transform our energy infrastructure, for instance. However, with my focus on learning and technology, there’s another ‘man on the moon’ project I’d like to see.

I’d like to see an entire K12 curriculum online (in English, but open, so that anyone can translate it). However, there are nuances here. I’m not oblivious to the fact that there are folks pushing in this direction. I don’t know them all, but I certainly have some reservations. So let me document three important criteria that I think are critical to make this work (cue my claim: “there are only two things wrong with education in this country, the curriculum and the pedagogy; other than that it’s fine”).

First, as presaged, it can’t be the existing curriculum.  Common Core isn’t evil, but it’s still focused on a set of elements that are out of touch. As an example, I’ll channel Roger Schank on the quadratic equation: everyone’s learned (and forgotten) it, almost no one actually uses it. Why? Making every kid learn it is just silly. Our curriculum is a holdover from what was stipulated at the founding of this country. Let’s get a curriculum that’s looking forward, not back. Let’s include the ability to balance a bankbook, to project manage, to critically evaluate claims, to communicate visually, and the like.

Second, as suggested, it can’t be the existing pedagogy. Lecture and test don’t lead to retaining and transferring the ability to do. Instead, learning science tells us that we need to be given challenging problems, and resources and guidance to solve them. Quite simply, we need to practice as we want to be able to perform. Instruction is designed action and guided reflection. Ideally, we’d layer learning on top of learner interests. Which leads to the third component.

We need to develop teachers who can facilitate learning in this new pedagogy. We can’t assume teachers can do this. There are many dedicated teachers, but the system is aligned against effective outcomes. (Just look at the lack of success of educational reform initiatives.) David Preston, with his Open Source Learning, has a wonderful idea, but it takes a different sort of teacher. We also can’t assume learners sitting at computers. So, having a teacher support component along with every element is important.

Are there initiatives that are working on all this? I have yet to see one that’s gotten it all right. The ones I’ve seen fall short on one element or another. I’m happy to be wrong!

I also recognize that agreeing on all the elements, each of which is controversial, is problematic. (What’s the right curriculum? Direct instruction or constructivist? How do we value teachers in society?) We’d have major challenges in assembling folks to address any of these, let alone all of them and achieving convergence.

However, think of the upside. What could we accomplish if we had an effective education system preparing youth for the success of our future? What is the best investment in our future? I realize it’s a big dream, and I’m not in a position to make it happen. Yet I did want to drop the spark, and see if it fires any imaginations. I’m happy to help. So, this is my ‘man on the moon’ project; what am I missing?

Representation Matters

13 July 2021 by Clark 1 Comment

There is a deep sense of where and how representation matters. Then there are less critical, but still important, ways in which representation counts. It includes talking about stereotypes, and calling out inappropriate labeling. Concepts matter, clarity matters, transparency matters. So here are two situations that are worth critiquing.

The first one that struck me this morning was an announcement. A researcher has created a petition asking Pew Research to stop using the ‘generations’ label. They’ve been using it in their research, and yet (as the petition points out) their own research shows it’s problematic.

Now this is a myth I called out in my last book (specifically on the topic of problematic beliefs). There are several complaints, such as that the boundaries are arbitrary, and the stereotyping is harmful. While we can differ by age, discrepancies are better explained by experience than by ‘generation’.

Another problem came in an article I was connected to on LinkedIn. In it, they were making the case for microlearning. While there are great reasons to tout the benefits of small bits of timely content, they didn’t really distinguish the uses. Which is a problem, since the different uses require different designs.

Here’s where representation matters. Pew Research’s reputation, in my mind, has gone down. I used to fill out some surveys from them, and stopped because the assumptions in the categories they were using were problematic. Finding out that they’re a major proponent of generations only aggravates that. Can I really trust any results they cite when the foundations are flawed?

Similarly, the organization that’s touting microlearning solutions has just undermined any belief in their credibility to actually do this appropriately. When you tout stuff in ways that show you don’t understand the necessary principles, you damage your reputation. I’m not likely to want to use this firm to design my solutions.

I push strongly for accuracy. This includes evidence-informed design, conceptual clarity, and transparency of motives. If you tout something, do so in a scrutable way. Marketing malarkey only muddies the water, and our industry has enough of a credibility problem.

Yes, there are more important ways representation matters: for kids to see themselves in culturally desirable roles, for voices to be heard. This is a less important aspect, but quality matters. Look at what you are saying, and ensure that it’s worth your audience’s time!

Misaligned expectations

29 June 2021 by Clark 1 Comment

As part of the Learning Development Conference that’s going on for the next five weeks (not too late to join in!), there have already been events. Given that the focus is on evidence-based approaches, a group set up a separate discussion room for learning science. Interestingly, though perhaps not surprisingly, our discussion ended up including barriers. One of those barriers, as has appeared in several guises across recent conversations, is the expectations placed on L&D. Some of them are our own, and some come from others, but they all hamper our ability to do our best. So I thought I’d discuss some of these misaligned expectations.

One of the most prominent expectations is around the timeframes for L&D work. My take is that after 9/11, a lot of folks didn’t want to travel, so all training went online. Unfortunately (as with the lingering pandemic), there was little focus on rethinking, and instead a mad rush to get things online. Which meant that a lot of content-based training ended up being content-based elearning. The rush to take content and put it onscreen drove some of the excitement around ‘rapid elearning’.

The continuing focus on efficiency – taking content, adding a quiz, and putting it online – was pushed to the extreme.  It’s now an expectation that with an authoring tool and content, a designer can put up a course in 1-2 weeks. Which might satisfy some box-checking, but it isn’t going to lead to any change in meaningful outcomes. Really, we need slow learning! Yet there’s another barrier here.

Too often, we have our own expectation that “if we build it, it is good”. That is, too often we take an order for a course, we build it, and we assume all is well. There’s no measurement to see if the problem is fixed, let alone tuning to ensure it is. We don’t have expectations that we need to be measuring our impact! Sure it’s hard; we have to talk to the business owners about measurement, and get data. Yet, like other areas of the organization, we should be looking for our initiatives to lead to measurable change. One of these days, someone’s going to ask us to justify our expenditures in terms of impact, and we’ll struggle if we haven’t changed.

Of course, another of our misaligned expectations is that our learning design approaches are effective. We still see, too often, courses that are content-dump, not serious solutions. This is, of course, why we’re talking about learning science, but while some of us have support to be evidence-based, others still do not. We face a populace, stakeholders and audiences alike, that has been to school. Therefore, the expectation is that if it looks like school, it must be learning. We have to fight this.

It doesn’t help that well-designed (and well-produced) elearning is subtly different from merely well-produced elearning. We can’t expect our stakeholders to know the difference (and, frankly, many vendors get by on this), but we must, and we must fight for the importance of that difference. While I laud the orgs that expect their learning group to be as evidence-based as the rest, and whose group can back that up with data, they’re sadly not as prevalent as we need.

There are more, but these are some major expectations that interfere with our ability to do our best. The solution? That’s a good question. I think we need to do a lot more education of our stakeholders (as well as ourselves). We need to (gently, carefully) generate an understanding that learning requires practice and feedback, and extends beyond the event. We don’t need everyone to understand the nuances (just as we don’t need to know the details of sales or operations or…unless we’re improving performance on it), but we do need them to be thinking in terms of reasonable amounts of time to develop effective learning, that this requires data, and that not every problem has a training solution. If we can adjust these misaligned expectations, we just might be able to do our job properly, and help our organizations. Which, really, is what we want to be about anyway.

Update on my events

17 June 2021 by Clark Leave a Comment

In January I posted about my upcoming webinars (now past), workshops, etc. As things open up again (yay, vaccines), some upcoming events will be happening live!  And, of course, virtual. In fact, one starts next week! So I thought it time to update you on the things I’ll be doing. Then we’ll get back to my regular posts ;). So here’s an update on my events.

First, starting next week, is the Learning Development Conference, by the Learning Development Accelerator (caveat: I’m on their advisory board). Last year, it was an experiment. They did several things very well: it was focused on evidence-based approaches, it created timings that worked for a broad section of the world’s populace (e.g. live sessions were offered twice, once early once late), and it had asynchronous content as well as synchronous. It also had ways to maintain contact and discussions. As a result, it was a success, leading to the Accelerator and this second event.

It’s for six weeks, and first I’ve got an asynchronous course on learning science (a subset of the bigger one I do as a blended workshop for HR.com/Allen Academy). I’m also doing two live sessions (at different times) on some of the new results from cognitive science. I’m already dobbed in for one debate, and they’ll likely call on me for more. There is also a suite of the top names in evidence-based L&D appearing, doing live sessions, asynchronous content, or both.

Second, at the end of August, I’ll be speaking at ATD’s International Conference and Exposition. This is a live event in Salt Lake City. (My first since the pandemic!) Of course I’m speaking on learning science, the topic of my book with them. There could even be a book-signing event! If you don’t know ATD’s ICE, it’s huge, both a blessing and a curse. Lots of quality content (ok, mostly ;), almost too many people to find your friends, but lots of new friends to make, with broad coverage. Also, a big exposition (maybe smaller this year ;).

Third, I’ll be at the Learning Guild’s DevLearn again this year. This has always been one of the best conferences because the Guild runs good events (caveat: I’m their first Guild Master). They want it to grow, of course, but as yet it’s still reasonably sized, and with quality content. For one, I’ll be speaking on learning science implications.

I’ll also be running a pre-conference workshop on Making Learning Meaningful. And this is, I suggest, truly of interest. I’ve been seeing more and more examples of well-designed content that’s still lacking in engagement, and this workshop is all about that. It’s an area I’ve been actively exploring and synthesizing into practical implications. Like in the series I did on the topic here, I cover how to hook initial interest, then maintain it through the experience. Also considered are the implications for the elements of learning, and a process to make it practical.

I recommend all three (or I wouldn’t be inclined to speak at them). So that’s the current update on my events. Hope to see you at one or another!

Exploring Exploration

15 June 2021 by Clark Leave a Comment

Learning, I suggest, is action and reflection. (And instruction should be designed action and guided reflection.) What that action typically ends up being is some sort of exploration (aka experimentation). Thus, in my mind, exploration is a critical concept for learning. That makes it worth exploring exploration.

In learning, we must experiment (e.g. act) and observe and reflect on the outcomes. We learn to minimize surprise, but we also act to generate surprise. I stipulate that we do so when the costs of getting it wrong are low. That is, making learning safe. So providing a safe sandbox for exploration is a support for learning. Similarly, informal learning benefits from low consequences for mistakes.

However, our explorations aren’t necessarily efficient nor effective. Empirically, we can make ineffective choices such as changing more than one variable at a time, or missing an area of exploration completely. For instruction, then, we need support. Many years ago, Wallace Feurzeig argued for guided exploration, as opposed to free search (the straw man used to discount constructivist approaches). So putting constraints on the task and/or the environment can support making exploration more effective.

Exploration also drives informal learning. Diversity on a team, properly managed, increases the likelihood of searching a broader space of solutions than otherwise. There are practices that increase the effectiveness of the search. Similarly, exploration should be focused on answering questions. We also want serendipity, but there should be guidelines that keep the consequences under control.

By making exploration safe and appropriately constrained, we can advance our understanding most rapidly, either helping some folks learn what others know, or advance what we all know. Exploration is a key to learning, and we need to understand it. Thus, we should also keep exploring exploration!

New recommended readings

8 June 2021 by Clark Leave a Comment

Of late, I’ve been reading quite a lot, and I’m finding some very interesting books. Not all have immediate take homes, but I want to introduce a few to you with some notes. Not all will be relevant, but all are interesting and even important. I’ll also update my list of recommended readings. So here are my new recommended readings. (With Amazon Associates links: support your friendly neighborhood consultants.)

First, of course, I have to point out my own Learning Science for Instructional Designers. A self-serving pitch confounded with an overload of self-importance? Let me explain. I am perhaps overly confident that it does what it says, but others have said nice things. I really did design it to be the absolute minimum reading that you need to have a scrutable foundation for your choices. Whether it succeeds is an open question, so check out some of what others are saying. As to self-serving, unless you write an absolute mass best-seller, the money you make off books is trivial. In my experience, you make more money giving it away to potential clients as a better business card than you do on sales. The typically few hundred dollars I get a year for each book aren’t going to solve my financial woes! Instead, it’s just part of my campaign to improve our practices.

So, the first book I want to recommend is Annie Murphy Paul’s The Extended Mind. She writes about new facets of cognition that open up a whole area for our understanding. Written by a journalist, it is compelling reading. Backed by science, it’s valuable as well. In the areas I know and have talked about, e.g. emergent and distributed cognition, she gets it right, which leads me to believe the rest is similarly spot on. (Also her previous track record; I mind-mapped her talk on learning myths at a Learning Solutions conference.) Well-illustrated with examples and research, she covers embodied cognition, situated cognition, and socially distributed cognition, all important. Moreover, there’re solid implications for the redesign of instruction. I’ll be writing a full review later, but here’s an initial recommendation on an important and interesting read.

I’ll also alert you to Tania Luna’s and LeeAnn Renninger’s Surprise. This is an interesting and fun book that, instead of focusing on learning effectiveness, looks at the engagement side. As their subtitle suggests, it’s about how to Embrace the Unpredictable and Engineer the Unexpected. While the first bit of that is useful personally, it’s the latter that provides lots of guidance about how to take our learning from events to experiences. Using solid research on what makes experiences memorable (hint: surprise!) and illustrative anecdotes, they point out systematic steps that can be used to improve outcomes. It’s going to affect my Make It Meaningful work!

Then, without too many direct implications, but intrinsically interesting, is Lisa Feldman Barrett’s How Emotions Are Made. Recommended to me, this book is more for the cog sci groupie, but it does a couple of interesting things. First, it creates a more detailed yet still accessible explanation of the implications of Karl Friston’s Free Energy Theory. Barrett talks about how those predictions are working constantly and at many levels in a way that provides some insights. Second, she then uses that framework to debunk the existing models of emotions. The experiments with people recognizing facial expressions of emotion get explained in a way that makes clear that emotions are not the fundamental elements we think they are. Instead, emotions are social constructs! Which undermines, BTW, all the facial recognition of emotion work.

I also was pointed to Tim Harford’s The Data Detective, and I do think it’s a well done work about how to interpret statistical claims. It didn’t grip me quite as viscerally as the aforementioned books, but I think that’s because I (over-)trust my background in data and statistics. It is a really well done read about some simple but useful rules for how to be a more careful reviewer of statistical claims. While focused on parsing the broader picture of societal claims (and social media hype), it is relevant to evaluating learning science as well.

I hope you find my new recommended readings of interest and value. Now, what are you recommending to me? (He says, with great trepidation. ;)

The case for model answers (and a rubric)

3 June 2021 by Clark 4 Comments

As I’ve been developing online workshops, I’ve been thinking more about the type of assessment I want. Previously, I made the case for gated submissions. Now I find another type of interaction I’d like to have. So here’s the case for model answers (and a rubric).

As context, many moons ago we developed a course on speaking to the media. This was based upon the excellent work of the principals of Media Skills, and was a case study in my Engaging Learning book. They had been running a face-to-face course, and rather than write a book, they wondered if something else could be done. I was part of a new media consortium, and was partnered with an experienced CD-ROM developer to create an asynchronous elearning course.

Their workshop culminated in a live interview with a journalist. We couldn’t do that, but we wanted to prepare people to succeed at that as an optional extra next step. Given that this is something people really fear (apocryphally more than death), we needed a good approximation. Along with a steady series of exercises going from recognizing a good media quote to composing one, we wanted learners to have to respond live. How could we do this?

Fortunately, our tech guy came up with the idea of a programmable answering machine. Through a series of menus, you would drill down to someone asking you a question, and then record an answer. We had two levels: one where you knew the questions in advance, and a final test where you’d have a story and details, but had to respond to unanticipated questions.

This was good practice, but how to provide feedback? Ultimately, we allowed learners to record their answers, then listen to their answers and a model answer. What I’d add now would be a rubric to compare your answer to the model answer, to support self-evaluation. (And, of course, we’d now do it digitally in the environment, not needing the machine.)

So that’s what I’m looking for again. I don’t need verbal answers, but I do want free-form responses, not multiple-choice. I want learners to be able to self-generate their own thoughts. That’s hard to auto-evaluate. Yes, we could do whatever the modern equivalent to Latent Semantic Analysis is, and train up a system to analyze and respond to their remarks. However, a) I’m doing this on my own, and b) we underestimate, and underuse, the power of learners to self-evaluate.

Thus, I'm positing a two-stage experience. First, there's a question that learners respond to. Ideally, paragraph size, though their response is likely to be longer than the model one; I tend to write densely (because I am). Then, they see their answer, a model answer, and a self-evaluation rubric.
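As a sketch of that two-stage interaction (the names and data structure here are my own illustration, not an actual implementation or any particular tool's API):

```python
from dataclasses import dataclass

@dataclass
class ReflectionItem:
    question: str
    model_answer: str
    rubric: list[str]  # self-evaluation criteria, e.g. from a mnemonic or model

def stage_one(item: ReflectionItem) -> str:
    # Stage 1: present only the question; the learner writes a free-form answer.
    return item.question

def stage_two(item: ReflectionItem, learner_answer: str) -> dict:
    # Stage 2: reveal the learner's answer beside the model answer and the
    # rubric, so the learner (not the system) does the evaluating.
    return {
        "your_answer": learner_answer,
        "model_answer": item.model_answer,
        "rubric": item.rubric,
    }
```

The point of the structure is that nothing is auto-scored: stage two just juxtaposes the two answers with the rubric, and the evaluation work stays with the learner.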

I'll suggest that there's a particular benefit to learners' self-evaluating. In the process (particularly with specific support in terms of a mnemonic or graphic model), learners can internalize the framework to guide their performance. Further, by internalizing the use of the framework and the monitoring of their own application of it, they can become self-improving learners.

This is on top of providing the ability to respond in richer ways than picking an option out of those provided. It requires a freeform response, closer to what will likely be required after the learning experience. That's similar to what I'm looking for from the gated response, but the latter expects peers and/or instructors to weigh in with feedback, whereas here the learner is responsible for evaluating. That's a more complex task, but also very worthwhile if carefully scaffolded.

Of course, it'd also be ideal if an instructor monitored the responses to look for any patterns, but that's outside the learner's own process. So that's the case for model answers. So, what say you? And is that supported anywhere, or in any way you know?

Overworked IDs

25 May 2021 by Clark 2 Comments

I was asked a somewhat challenging question the other day, and it led me to reflect. As usual, I'm sharing that with you. The question was: "How can IDs keep up with everything, and feel competent and confident in our work?" It's not a trivial question! So I'll share my response to overworked IDs.

There was considerable context behind the question. My interlocutor weighed in with her tasks:

"sometimes I wonder how to best juggle everything that my role requires: project management, design and ux/ui skills, basic coding, dealing with timelines and SMEs and managers. Don't forget task analysis and needs assessment skills, making content accessible and engaging. And staying on top of a variety of software."

I recognize that this is the life of overworked IDs, particularly if you're the lone ID (which isn't infrequent), or expected to handle course development on your own. Yet it is a lot of different competencies. In work with IBSTPI, where we're defining competencies, we're recognizing that different folks cut up roles differently. Regardless, many folks carry competency requirements that in other orgs are handled by different teams. So what's a person to do?

My response focused on a couple of things. First, there are the expectations that have emerged. After 9/11, when we were avoiding travel, there was a push for elearning. And, with the usual push for efficiency, rapid elearning became the vogue. That is, tools that made it easy to take PDFs and PPTs and put them up online with a quiz. It looked like lectures, so it must be learning, right?

One of the responses, then, is to manage expectations. In fact, a recent post addressed the gap between what we know and what orgs should know. We need to reset expectations.

As part of that, we need to create better expectations about what learning is. That was what drove the Serious eLearning Manifesto [elearningmanifesto.org], where we tried to distinguish between typical elearning and serious elearning. Our focus should shift to where our first response isn't a course!

As to what is needed to feel competent and confident, I've been arguing there are three strands. For one (not surprisingly ;), I think IDs need to know learning science. This includes being able to fill in the gaps in, and update on, instructional design prescriptions, and also to be able to push back against bad recommendations. (Besides the book, this has been the subject of the course I run for HR.com via Allen Academy, will be the focus of my presentation at ATD ICE this summer, and is also my asynchronous course for the LDC conference.)

Second, I believe a concomitant element is understanding true engagement. Here I mean going beyond trivial approaches like tarting up drill-and-kill, and gamification, and getting into making it meaningful. (I've run a workshop on that through the LDA, and it will be the topic of my workshop at DevLearn this fall.)

The final element is a performance ecosystem mindset. That is, thinking beyond the course: first to performance support, still on the optimal execution side of the equation. Then we move to informal learning, facilitating learning. Read: continual innovation! This may seem like more competencies to add on, but the goal is to reduce the emphasis (and workload) on courses, and build an organization that continues to learn. I address this in the Revolutionize L&D book, and also in my mobile course for Allen Interactions (a mobile mindset is, really, a performance ecosystem mindset!).

If you're on top of these, you should be prepared to do your job with competence and confidence. Yes, you still have to navigate organizational expectations, but you're better equipped to do so. I'll also suggest you stay tuned for further efforts to make these frameworks accessible.

So, those are my responses to overworked IDs. Sorry, no magic bullets, I'm afraid (because 'magic' isn't a thing, sad as that may be). Hopefully, however, this is a basis upon which to build. That's my take, at any rate; I welcome hearing how you'd respond.

