Learnlets

Clark Quinn’s Learnings about Learning

Misaligned expectations

29 June 2021 by Clark

As part of the Learning Development Conference that’s going on for the next five weeks (not too late to join in!), there have already been events. Given that the focus is on evidence-based approaches, a group set up a separate discussion room for learning science. Interestingly, though perhaps not surprisingly, our discussion ended up including barriers. One of those barriers, as has appeared in several guises across recent conversations, is the set of expectations on L&D. Some are our own, and some come from others, but they all hamper our ability to do our best. So I thought I’d discuss some of these misaligned expectations.

One of the most prominent expectations is around the timeframes for L&D work. My take is that after 9/11, a lot of folks didn’t want to travel, so all training went online. Unfortunately (as with the lingering pandemic), there was little focus on rethinking, and instead a mad rush to get things online. Which meant that a lot of content-based training ended up being content-based elearning. The rush to take content and put it onscreen drove some of the excitement around ‘rapid elearning’.

The continuing focus on efficiency – taking content, adding a quiz, and putting it online – was pushed to the extreme.  It’s now an expectation that with an authoring tool and content, a designer can put up a course in 1-2 weeks. Which might satisfy some box-checking, but it isn’t going to lead to any change in meaningful outcomes. Really, we need slow learning! Yet there’s another barrier here.

Too often, we have our own expectation that “if we build it, it is good”. That is, too often we take an order for a course, we build it, and we assume all is well. There’s no measurement to see if the problem is fixed, let alone tuning to ensure it is. We don’t have expectations that we need to be measuring our impact! Sure it’s hard; we have to talk to the business owners about measurement, and get data. Yet, like other areas of the organization, we should be looking for our initiatives to lead to measurable change. One of these days, someone’s going to ask us to justify our expenditures in terms of impact, and we’ll struggle if we haven’t changed.

Of course, another of our misaligned expectations is that our learning design approaches are effective. We still see, too often, courses that are content dumps, not serious solutions. This is, of course, why we’re talking about learning science, but while some of us have support to be evidence-based, others still do not. We face a populace, stakeholders and audiences alike, who have been to school. Therefore, the expectation is that if it looks like school, it must be learning. We have to fight this.

It doesn’t help that well-designed (and well-produced) elearning is subtly different from merely well-produced elearning. We can’t expect our stakeholders to know the difference (and, frankly, many vendors get by on this), but we must, and we must fight for the importance of that difference. While I laud the orgs whose expectations are that their learning group is as evidence-based as the rest, and whose group can back that up with data, they’re sadly not as prevalent as we need.

There are more, but these are some major expectations that interfere with our ability to do our best. The solution? That’s a good question. I think we need to do a lot more education of our stakeholders (as well as ourselves). We need to (gently, carefully) generate an understanding that learning requires practice and feedback, and extends beyond the event. We don’t need everyone to understand the nuances (just as we don’t need to know the details of sales or operations or…unless we’re improving performance on it), but we do need them to be thinking in terms of reasonable amounts of time to develop effective learning, that this requires data, and that not every problem has a training solution. If we can adjust these misaligned expectations, we just might be able to do our job properly, and help our organizations. Which, really, is what we want to be about anyway.

Doing Gamification Wrong

22 June 2021 by Clark

As I’ve said before, I’m not a fan of ‘gamification’. Certainly for formal learning, where I think intrinsic motivation is a better area to focus on than extrinsic. (Yes, there are times it makes sense, like tarting up rote memory development, but it’s under-considered and over-used.) Outside of formal learning, it’s clear that it works in certain places. However, we need to be cautious in considering it a panacea. In a recent instance, I actually think it’s definitely misapplied. So here’s an example of doing gamification wrong.

This came to me via a LinkedIn message where the correspondent pointed me to their recent blog article. (BTW, I don’t usually respond to these, but if I do, you’re going to run the risk that I poke holes. 😈) In the article, they were talking about using gamification to build organizational engagement. Interestingly, even in their own article, they were pointing to other useful directions unknowingly!

The problem, as claimed, is that working remotely can remove engagement. Which is plausible. The suggestion, however, was that gamification was the solution. Which I suggest is a patch upon a more fundamental problem. The issue was a daily huddle, and this quote summarizes the problem: “there is zero to little accountability of engagement and participation”. Their solution: add points to these things. Let me suggest that’s wrong.

What facilitates engagement is a sense of purpose and belonging. That is, recognizing that what one does contributes to the unit, and the unit contributes to the organization, and the organization contributes to society. Getting those lined up and clear is a great way to build meaningful engagement. Interestingly, even in the article they quote: “to build true engagement, people often need to feel like they are contributing to something bigger than themselves.” Right! So how does gamification help? That seems to be trying to patch a  lack of purpose. As I’ve argued before, the transformation is not digital first, it’s people first.

They segue off to microlearning, without (of course) defining it. They end up meaning spaced learning (as opposed to performance support). Which, again, isn’t gamification, but they push it in there anyway. Again, wrongly. They do mention a successful instance, where Google got 100% compliance on travel expenses, but that’s very different from company engagement. It’s got to be the right application.

Overall, gamification by extrinsic motivation can work under the right circumstances, but it’s not a solution to all that ails an organization. There are ways and times, but it’s all too easy to be doing gamification wrong. ‘Tis better to fix a broken culture than to patch it. Patching is, at best, a temporary solution. This is certainly an example.

 

Exploring Exploration

15 June 2021 by Clark

Learning, I suggest, is action and reflection. (And instruction should be designed action and guided reflection.) What that action typically ends up being is some sort of exploration (aka experimentation). Thus, in my mind, exploration is a critical concept for learning. That makes it worth exploring exploration.

In learning, we must experiment (i.e. act), then observe and reflect on the outcomes. We learn to minimize surprise, but we also act to generate surprise. I stipulate that we do so when the costs of getting it wrong are low. That is, making learning safe. So providing a safe sandbox for exploration is a support for learning. Similarly, informal learning is supported by keeping the consequences of mistakes low.

However, our explorations aren’t necessarily efficient or effective. Empirically, we make ineffective choices, such as changing more than one variable at a time, or missing an area of exploration completely. For instruction, then, we need support. Many years ago, Wallace Feurzeig argued for guided exploration, as opposed to free search (the straw man used to discount constructivist approaches). So putting constraints on the task and/or the environment can make exploration more effective.

Exploration also drives informal learning. Diversity on a team, properly managed, increases the likelihood of searching a broader space of solutions than otherwise. There are practices that increase the effectiveness of the search. Similarly, exploration should be focused on answering questions. We also want serendipity, but there should be guidelines that keep the consequences under control.

By making exploration safe and appropriately constrained, we can advance our understanding most rapidly, either helping some folks learn what others know, or advance what we all know. Exploration is a key to learning, and we need to understand it. Thus, we should also keep exploring exploration!

The case for model answers (and a rubric)

3 June 2021 by Clark

As I’ve been developing online workshops, I’ve been thinking more about the type of assessment I want. Previously, I made the case for gated submissions. Now I find another type of interaction I’d like to have. So here’s the case for model answers (and a rubric).

As context, many moons ago we developed a course on speaking to the media. This was based upon the excellent work of the principals of Media Skills, and was a case study in my Engaging Learning book. They had been running a face-to-face course, and rather than write a book, they wondered if something else could be done. I was part of a new media consortium, and was partnered with an experienced CD-ROM developer to create an asynchronous elearning course.

Their workshop culminated in a live interview with a journalist. We couldn’t do that, but we wanted to prepare people to succeed at that as an optional extra next step. Given that this is something people really fear (apocryphally more than death), we needed a good approximation. Along with a steady series of exercises, going from recognizing a good media quote to composing one, we wanted learners to have to respond live. How could we do this?

Fortunately, our tech guy came up with the idea of a programmable answering machine. Through a series of menus, you would drill down to someone asking you a question, and then record an answer. We had two levels: one where you knew the questions in advance, and the final test was one where you’d have a story and details, but you had to respond to unanticipated questions.

This was good practice, but how to provide feedback? Ultimately, we allowed learners to record their answers, then listen to their answers and a model answer. What I’d add now would be a rubric to compare your answer to the model answer, to support self-evaluation. (And, of course, we’d now do it digitally in the environment, not needing the machine.)

So that’s what I’m looking for again. I don’t need verbal answers, but I do want free-form responses, not multiple-choice. I want learners to be able to self-generate their own thoughts. That’s hard to auto-evaluate. Yes, we could do whatever the modern equivalent to Latent Semantic Analysis is, and train up a system to analyze and respond to their remarks. However, a) I’m doing this on my own, and b) we underestimate, and underuse, the power of learners to self-evaluate.

Thus, I’m positing a two-stage experience. First, there’s a question that learners respond to. Ideally, paragraph size, though their response is likely to be longer than the model one; I tend to write densely (because I am). Then, they see their answer, a model answer, and a self-evaluation rubric.
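
For the record, that two-stage flow is simple enough to sketch in code. Here’s a minimal illustration (all names here, such as TwoStageItem, are hypothetical, not taken from any real authoring tool): the key design point is that the model answer and rubric stay hidden until the learner has committed a free-form response.

```python
from dataclasses import dataclass

@dataclass
class TwoStageItem:
    """One question in the two-stage, self-evaluated interaction."""
    question: str
    model_answer: str
    rubric: list[str]  # criteria the learner checks their answer against

    def stage_one(self) -> str:
        """Stage 1: show only the question; model answer stays hidden."""
        return self.question

    def stage_two(self, learner_answer: str) -> dict:
        """Stage 2: after submission, reveal everything needed
        for self-evaluation side by side."""
        return {
            "your_answer": learner_answer,
            "model_answer": self.model_answer,
            "rubric": self.rubric,
        }

item = TwoStageItem(
    question="What makes a good media quote?",
    model_answer="It is short, concrete, and quotable out of context.",
    rubric=["Did you address brevity?", "Did you give a concrete criterion?"],
)
review = item.stage_two("A good quote is short and vivid.")
```

The gate is the pedagogical point: revealing the model answer and rubric only after the learner has committed an answer is what makes the self-evaluation meaningful.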

I’ll suggest that there’s a particular benefit to learners’ self-evaluating. In the process (particularly with specific support in terms of a mnemonic or graphic model), learners can internalize the framework to guide their performance. Further, they can internalize using the framework and monitoring their application to become self-improving learners.

This is on top of providing the ability to respond in richer ways than picking an option out of those provided. It requires a freeform response, closer to what likely will be required after the learning experience. That’s similar to what I’m looking for from the gated response, but the latter expects peers and/or instructors to weigh in with feedback, whereas here the learner is responsible for evaluating. That’s a more complex task, but also very worthwhile if carefully scaffolded.

Of course, it’d also be ideal if an instructor is monitoring the response to look for any patterns, but that’s outside the learners’ response. So that’s the case for model answers. So, what say you? And is that supported anywhere or in any way you know?

How to be an elearning expert

1 June 2021 by Clark

I was asked (and have been a time or two before): “What’s the one most important thing you’d tell someone who wants to be a successful Ed Tech industry leader?” Of course there wasn’t just one ;). Still, looking at colleagues who I think fit that characterization, I find some commonalities that are worth sharing. So here’s one take on how to be an elearning expert.

Let’s start with that ‘one thing’. Which is challenging, since it’s more than one thing! Still, I boiled it down into two components: know your stuff, and let people know. That really is the core. So let’s unpack that some more. The first thing is to establish credibility. Which means demonstrating that you track and promote the right stuff.

Some folks have created a model that they tout. Cathy Moore has Action Mapping, Harold Jarche has PKM, Con Gottfredson has the 5 moments of need, and so on. It’s good having a model, if it’s a good, useful one (there are people who push models that are hype or ill-conceived at best). Note that these folks aren’t known just for their model, and most of them can talk knowledgeably about much more, but ‘owning’ a model that is useful is a great place to be. (I occasionally regret that I haven’t done a good job of branding my models.) They understand their model and its contribution, it’s a useful one, and therefore they contribute validly that way and are rightly recognized.

Another approach like this is owning a particular domain. Whether gaming (e.g. Karl Kapp), visuals (Connie Malamed), design (Michael Allen), mixed realities (Ann Rollins), AI (Donald Clark), informal (Jane Hart), evaluation (Will Thalheimer), management (Matt Richter), and so on, they have deep experience and a great conceptual grasp in a particular area. Again, they can and do speak outside this area, but when they talk about these topics in particular, what they say is worthy of your attention!

Then there are other folks who don’t necessarily have a single model, but instead reliably represent good science. Julie Dirksen, Patti Shank, Jane Bozarth, Mirjam Neelen, and others have established a reputation for knowing the learning science and interpreting it in accurate, comprehensible, and useful ways.

The second point is that these folks write and talk about their models and/or approaches. They’re out there, communicating. It’s about reliably saying the important things again and again (always with a new twist). A reputation doesn’t just emerge whole-cloth, it’s built step by step. They also practice what they preach, and have done the work so they can talk about it. They talk the talk and walk the walk. Further, you can check what they say.

So how to start? There are two clear implications. Obviously, you have to Know. Your. Stuff! Know learning, know design, know engagement, know tech. Further, know what it means in practice! You can focus deeply in one area, or generate one useful and new model, or have a broad background, but it can’t just be one narrow thing; all your health content for one provider isn’t enough. What you’re presenting needs to be representative and transferable. Further, you need to keep up to date, so that means continually learning: reading, watching, listening.

Second, it’s about sharing. Writing and speaking are the two obvious ways. Sure, you can host a channel: podcast, vlog, blog, but if you’re hosting other folks, you’re seen as well connected but not necessarily as the expert. Further, I reckon you have to be able to write and speak (and pretty much all of these folks do both well). So, start by speaking at small events, and get feedback to improve. Study good presentation style. Then start submitting for events like the Learning Guild, ATD, or LDA (caveats on all of these owing to various relationships, but I think they’re all scrutable). I once wrote about how to read and write proposals, and I think my guidance is still valid.

Similarly, write. Learning Solutions or eLearn Mag are two places to put stuff that’s sensibly rigorous but written for practitioners. Take feedback to heart, and deliberately improve. Make sure you’re presenting value, not pitching anything. What conferences and magazines say about not selling, that your clear approach is what sells, is absolutely true.

Also, make sure that you have a unique ‘voice’. No one needs you saying the same things others are saying, at least not in the same way. Have a perspective, your own take. Your brand is not only what you say, but how you say it.

A related comment: track some related fields. Most of the folks I think of as experts have some other area they draw inspiration from. UX/UI, anthropology, software engineering: there are many possibilities, and bringing insight from a related field benefits our own and keeps you fresh.

Oh, one other thing. You have to have integrity. People have to be able to trust what you say. If you push something for which you have a private benefit, or something that’s trendy but not real, you will lose whatever credibility you’ve carefully built up. Don’t squander it!

So that’s my take on how to be an elearning expert. So, what have I missed?

A message to CxOs 2: about org learning myths

11 May 2021 by Clark

When I wrote my last post on a message to CxOs about L&D myths, I got some pushback. Which, for the record, is a good thing; one of us will learn something. The counter to my claim that L&D is often its own worst enemy was that there are folks in L&D who get it, but fight upward against wrong beliefs. Which absolutely is true as well. So, let’s also talk about what CxOs need to know about the org learning myths they may believe.

First, however, I do want to say that there is evidence that L&D isn’t doing as well as it could and should. This comes from a variety of sources. However, the question is where the blame lies. My previous post talked about how L&D deludes itself, but there are also reasons to believe in unfair expectations. So here’s the other side.

  1. If it looks like schooling… I used this same one against L&D, but it’s also the case that CxOs may believe this. Further, they could be happy if that’s the case. Which would be a shame, just as I pointed out in the other case. Lectures, information dump & knowledge test, content presentation in general, don’t lead to meaningful change in behavior in the absence of activity. Designed action and guided reflection, which looks a lot more like a lab or studio than a classroom, is what we want.
  2. SMEs know what needs to be learned. Research tells us the contrary: experts don’t have conscious access to around 70% of what they do (tho’ they do have access to what they know). Just accepting what a SME says and making content around that is likely to lead to a content dump and a lack of behavior change. Instead, trust (and ensure) that your designers know more about learning than the SME, and have practices to help ameliorate the problem.
  3. The only thing that matters is keeping costs low.  This might seem to be the case, but it reflects a view that org learning is a necessary evil, not an investment. If we’re facing increasing change, as the pundits would have it, we need to adapt. That means reskilling. And effective reskilling isn’t about the cheapest approach, but the most effective for the money. Lots of things done in the name of learning (see above) are a waste of time and money. Look for impact first.
  4. Courses are the answer to performance issues.  I was regaled with a tale about how sales folks and execs were  insisting that customers wanted training. Without evaluating that claim. I’ll state a different claim: customers want solutions. If it’s persistent skills, yes, training’s the answer. However, a client found that customers were much happier with how-to videos than training for most of the situations. It’s a much more complex story.
  5. Learning stops at the classroom. As is this one. One of the reasons Charles Jennings was touting 70:20:10 was not the numbers, but that it was a way to get execs to realize that only the bare beginning comes from courses, if at all. There’s ongoing coaching with stretch assignments and feedback, and interacting with other practitioners…don’t assume a course solves a problem. A colleague mentioned how her org realized that it couldn’t create a course without also creating manager training; otherwise they’d undermine the outcomes instead of reinforcing them.
  6. We’ve invested in an LMS, that’s all we need. That’s what the LMS vendors want you to believe ;)! Seriously, if all you’re doing is courses, this could be true, but I’m hoping the points above have convinced you that courses aren’t all you should be doing.
  7. Customers want training. Back to an earlier statement: customers want solutions. It is cool to go away to training and get smothered in good food and perks. However, it’s also known that sometimes that goes to the manager, not the person who’ll actually be doing the work! Also, training can’t solve certain types of problems. There are many types of problems customers encounter, and they have different types of solutions. Videos may be better for things that occur infrequently, onboard help or job aids may meet other needs too unusual to predict for training, etc. We don’t just want to make customers happy, we want to make them successful!
  8. We need ways to categorize people. It’s a natural human thing to categorize, including people. So if someone creates an appealing categorization that promises utility, hey, that sounds like a good investment. Except there are many problems! People aren’t easy to categorize, instruments struggle to be reliable, and vested interests will prey upon the unwary. Anyone can create a categorization scheme, but validating it, and having it be useful, are both surprisingly big hurdles. Asking people questions about their behavior tends to be flawed for complex reasons. Using such tools for important decisions like hiring and tracking has proven to be unethical. Caveat emptor.
  9. Bandwagons are made to be jumped on. Face it, we’re always looking for new and better solutions. When someone links some new research to a better outcome, it’s exciting. There’s a problem, however. We often fall prey to arguments that appear to be new, but really aren’t. For instance, all the ‘neuro’ stuff unpacks to some pretty ordinary predictions we’ve had for yonks. Further, there are real benefits to machine learning and even artificial intelligence. Yet there’s also a lot of smoke to complement the sizzle. Don’t get misled. Do a skeptical analysis. This holds doubly true for technology objects. It’s like a cargo cult: whatever has come down the pike must be a new gift from those magic technologists! Yet, this is really just another bandwagon. Sure, Augmented Reality and Virtual Reality have some real potential. They’re also being way overused. This is predictable (cf. PowerPoint presentations in Second Life), but ideally is avoided. Instead, find the key affordances – what the technology uniquely provides – and match the capability to the need. Again, be skeptical.

My point here is that there can be misconceptions about learning  within  L&D, but it can also be outside perspectives that are flawed. So hopefully, I’ve now addressed both. I don’t claim that this is a necessary and complete set, just certain things that are worth noting. These are org learning myths that are worth trying to overcome, or so I think. I welcome your thoughts!

A message to CxOs: about L&D myths

4 May 2021 by Clark

If you’re a CEO, COO, CFO, and the like, are you holding L&D to account? Because much of what I see coming out of L&D doesn’t stand up to scrutiny. As I’ve cited in books and presentations, there’s evidence that L&D isn’t up to scratch. And I think you should know a few things that may be of interest to you. So here’re some L&D myths you might want to watch out for.

  1. If it looks like school, it must be learning. We’ve all been to school, so we know what learning looks like, right? Except, do you remember how effective school actually was? Did it give you many of the skills you apply in your job now? Maybe reading and writing, but beyond that, what did you learn about business, leadership, etc? And how did you learn those things? I’ll bet not by sitting and listening to lectures presented via bulletpoints. If it looks like schooling, it’s probably a waste of time and money. It should look more like a lab, or a studio.
  2. If we’re keeping our efficiency in line with others, we’re doing good. This is a common belief amongst L&D: well, our [fill in the blank: employees served per L&D staff member | costs per hour of training | courses run per year | etc.] is the same or better than the industry average, so we’re doing good. No, this is all about efficiency, not effectiveness. If they’re not reporting on measurable changes in the improvement of business metrics, like sales, customer service, operations, etc., they’re not demonstrating their worth. It’s a waste of money.
  3. We produce the courses our customers need. Can they justify that? It’s a frequent symptom that the courses that are asked for have little relation to the actual problem. There are many reasons for performance problems, yet the reflexive response is to throw a course at the issue, without knowing whether it’s truly a function of a lack of skill. Courses can’t address problems like the wrong incentives, or a lack of resources. If you’re not ensuring that you’re only using courses when they make sense, you’re throwing away money.
  4. Job aids aren’t our job. Performance should be the job, not just courses. As Joe Harless famously said: “Inside every fat course there’s a thin job aid crying to get out.” There are many times when a job aid is a better solution than a course. To believe otherwise is one of the classic L&D myths. If they’re avoiding taking that on, they’re avoiding a cheaper and more effective solution.
  5. Informal learning isn’t our job. Well, it might not be if L&D truly doesn’t understand learning, but they should. When you’re doing trouble-shooting, research, design, etc., you don’t know the answer when you start. That’s learning too, and there is a role for active facilitation of best principles. Assuming people know how to do it isn’t justifiable. Informal learning is the key to innovation, and innovation is a necessary differentiator.
  6. Our LMS is all we need. Learning management systems (which is a misnomer, they’re course management systems) manage courses well. However, if they’re trying to also be resource portals, and social media systems, and collaboration tools, they’re unlikely to be good at all that. Yet those are also functions that affect optimal performance and continual innovation (the two things I argue  should be the remit of L&D). Further, you want the right tool for the job. One all-singing, all-dancing solution isn’t the way to bet for IT in general, and that holds true for L&D as well.
  7. Our investment in evaluation instruments is valuable. If you’re using some proprietary tools that purport to help you identify and characterize individuals, you’re probably being had. If you’re using it for hiring and promotion, you’re also probably violating ethical guidelines. Whether personality, or behavior, or any other criteria, most of these are methodologically and psychometrically flawed. You’re throwing away money. We have a natural instinct to categorize, but do it on individual performance, not on some flawed instrument.
  8. We have to jump on this latest concept. There’s a slew of myths and misconceptions running around that are appealing and yet flawed. Generations, learning styles, attention spans, neuro-<whatever> and more are all appealing, and also misguided. Don’t spend resources on investing in them without knowing the real tradeoffs and outcomes. These are classic L&D myths.
  9. We  have to have this latest technology. Hopefully you’re resistant to new technologies unless you know what they truly will do for your organization. This holds true for L&D as well. They’re as prone to lust after VR and AR and AI as the rest of the organization. They’re also as likely to spend the money without knowing the real costs and consequences. Make sure they’re coming from a place where they know the unique value the technology brings!

There’s more, but that’s enough for now. Please, dig in. Ask the hard questions. Get L&D to be scrutable for real results, not platitudes. Ensure that you’re not succumbing to L&D myths. Your organization needs it, and it’s time to hold them to account as you do the rest of your organization. Thanks, and wishing you all the best.

Something that emerged from a walk, and, well, I had to get it off my chest. I welcome your thoughts.

Book hiccups

23 March 2021 by Clark

As much as writing books is something I do (and I’m immodestly proud of the outcomes), they don’t always come out the way I expect. And that turns out to be true for almost every one!  So here, for the record and hopefully as both mea culpas and lessons learned, are my book hiccups. And you really don’t have to read this, unless you want some things to check for.

After my first book,  Engaging Learning, came out, someone asked me “how do I know it’s really your book?” He had a valid point, because while there was a bio, there was no picture of me. Somehow, I just expected it (and if memory serves, they’d asked for one). Yet it didn’t appear on the dust jacket nor on the author page. In fact, the only Wiley book that  did have my picture ended up being the next one.

Shortly after my next book came out,  Designing mLearning,  I got an email asking for clarification. The correspondent pointed to a particular diagram, and asked what I meant. It turns out, in editing (they’d outsourced it, I understand), someone had reversed the meaning of a caption for a diagram! Worse, I hadn’t caught it. At this time I can no longer find what it was, but it was an unhappy experience.

For my third book,  The Mobile Academy, I asked my friend and colleague John Ittelson to write the preface. And somehow, it wasn’t in the initial printing!  That was a sad oversight, but fortunately they remedied it very quickly.

I had been upset by how expensive the first two books were. Consequently, I was pleased to find out that my fourth, Revolutionize Learning & Development, which I really wanted to see do well, was priced much more reasonably. Of course, then I found out why: it was made with paper that wasn’t of the best quality. At least it’s affordable, and I continue to hear from people who have found it useful.

I’m happy to say that the next one,  Millennials, Goldfish & Other Training Misconceptions  has been hiccup free. After switching to ATD Press (they’d been a co-publisher of the previous book), they did a great job with the design, taking my notion of humorous sketches for each topic and executing against it graphically. It’s been well-recognized.

Unfortunately, as I just found out after getting my mitts on the most recent one,  Learning Science for Instructional Designers,  two of the four blurbs I solicited from esteemed colleagues don't show up in the book!  They do show up on the ATD site, at least (and of course they're on my own page for the book). I didn't get a copy of the back cover beforehand, so I couldn't have checked. My apologies to them. I checked, and it turns out it was due to a space issue with the book formatting. 🤷  Other than that, I'm  as  happy with this book as the last (that is, really happy)!

I can say that I’ve always tried to write in a way that focuses on the aspects that relate to our mental architecture. The goal is that as the technology changes, the implications remain appropriate. Our brains aren’t changing as fast as the tech! I guess I’m just not ready to accept planned obsolescence, so I’m keeping them available.

So there you have it, the book hiccups that can come with publishing. If you’ve made it this far, at least I hope you have some more things to check to make sure your books come out as good as possible.


Buzzwords and Branding

26 January 2021 by Clark Leave a Comment

I was reflecting on a few things on terminology, buzzwords and branding in particular. And, as usual, learning out loud, here are my reflections.


The script:

So I’ve been known to take a bit of a blade to buzzwords (c.f. microlearning). And, I reckon there’s a distinction between vocabulary and hype. Further, I get the need for branding (and have been slack about my own).  So, here I talk about buzzwords and branding.

First, vocabulary is important. I’m a stickler (I’m sure some would say pedantic ;) about conceptual clarity. We need to have clear language to distinguish between different concepts. (You shouldn’t say ‘cat’ when you mean ‘dog’, someone’s likely to get a wee bit confused!)

And, to be clear, there’s internal and external vocabulary. For instance, other people don’t really care about objectives, they just want outcomes. Internal vocabulary can serve as shortcuts, helping us minimize what we need to say while still communicating. Brevity is the soul of wit, after all.

And then there’s hype. The distinction, I reckon, is when we start tossing in buzzwords that are new, drawn from elsewhere, and promise great things. Adaptive and neuro- are two examples of buzzphrases that are open to interpretation but sound intriguing. Yet they require careful examination.

Then, there’s branding. You attach a label to something to identify it specifically. Harold Jarche’s Personal Knowledge Mastery (PKM), for instance, is a brand for a framework. So, too, would be Michael Allen’s SAM (Successive Approximation Model) and CCAF (Context-Challenge-Activity-Feedback). They’re ways to package up good ideas. And of course, to take ownership.

This latter step, I confess, I’ve failed on. The alignment in Engaging Learning and the different categories of mobile are two places I dropped the ball. I recently made a brief attempt to remedy another, releasing the Performance Ecosystem Maturity Model.

I  do have the 4C’s of Mobile, but while that turns out to be useful, it’s not the most important characterization. In a conversation the other day, someone asked what I called the mobile framework I’d mentioned, which he found useful. And I didn’t have an answer. I’ve talked about it before, but I didn’t label it. And yet it’s kind of the most important way to look at mobile! I use it as the organizing framework when I talk about mobile (really, the performance ecosystem):

  • Augmenting formal learning
  • Performance support (mobile’s natural niche)
  • Social (more the informal)
  • Contextual (mobile’s unique opportunity)

I wasn’t sure what to brand this, so for the moment it’s the Four Modes of mLearning (4M? 4MM?).

And for games, that alignment I mentioned I briefly termed the EEA: Effectiveness-Engagement Alignment. The point is that the elements that lead to effective education practice, and the ones that lead to engaging experiences, have a perfect alignment. It’s been a good basis for design for me. But, again, that labeling came more than a decade after the book first came out.

Ok, so I was counting on the ‘Quinnovation’ branding. And that’s worked, but it’s not quite enough to hang products on. So…I’m working on it. (And it may be that having ‘Learnlets’ separate from Quinnovation is another self-inflicted impediment!)

Still, I think it’s important to distinguish between buzzwords and branding. And they shouldn’t be the same (trademarking ‘microlearning’, anyone ;). Again, vocabulary is important, for clarity, not hype. And branding is good for attribution. But they’re not the same thing. Those are my thoughts, what are yours?

Update on my workshops

13 January 2021 by Clark Leave a Comment

Just as I did an update on my books, it’s time to also let you know about some workshop opportunities. Together, I think they create a coherent whole. They’re scattered around a bit, so here I lay out how they fit together, how they’re run, what they cover, and how you can find them. They’re not free, but they’re reasonably priced, with reputable organizations. So here’s an update on my workshops.

First, they form three pieces of the picture. I talk about two things, generally. It comes from my cheeky quip that L&D isn’t doing anywhere near what it could and should, and what it  is doing, it’s doing badly. So, the first part is about the larger performance ecosystem, and the second part is about learning experience design (LXD). And, that latter part actually pulls apart into two pieces.

I see LXD as the elegant integration of learning science with engagement. Thus, you need to understand learning science (and the associated elements). Then, you  also  need to understand what makes an engaging experience. So, two workshops address these, one for each.

The learning science workshop is being run under the auspices of HR.com (brokered through the Allen Academy). It’s under their professional education series, called Effective Learning Strategies. It’s a five week course (with a delayed sixth week). There are readings, a weekly session, and assignments. You can earn a certificate. In it I cover the basics of cognitive science, the learning outcomes, social/cultural/emotional elements, and the implications for design. It’s just what you need to know, and very much aligned with my forthcoming book!

The second part of the story is about the engagement side. While I’ve tried to boil down learning science into the necessary core (and there are other resources for that), the engagement side isn’t well covered. And note, I’m  not talking about tarted-up drill-and-kill, gamification, ‘click to see more’, etc. Instead, I’m going deep into building, and maintaining: motivation, reducing anxiety, and more. Formally, it’s the Make It Meaningful workshop. This is a four week course from the Learning Development Accelerator, with videos to present the information, then live sessions to practice application, and takeaway assignments. It’s based upon the learnings from my book on designing learning games,  Engaging Learning,  but I’ve spent months this past summer making it more general, going deeper, validating the newest information, and making it accessible and comprehensible.

The final story is the performance ecosystem workshop. In what may seem a silly approach, it manifests as a course on mobile! However, once you recognize that mobile is about pretty much everything but courses (and can do contextual, which is an important new direction), it makes sense. When I was writing the mobile book, the intent was that it be a stealth approach to shift the L&D mindset away from just courses. Which, of course, was made more clear with my Revolutionize L&D book. So I hope you can see that this course, too, has a solid foundation. It covers courses, performance support, informal and social learning, contextual opportunities, and strategy, in six weeks of online sessions, with a tiny bit of reading, and interim assignments. It’s run by the Allen Academy directly.

Together, I think these three workshops provide the knowledge foundations you need to run a L&D operation. Two talk about what makes courses that are optimally engaging and effective, and one looks at the rest of the picture. Evidence suggests there’s a need. And I’ve worked hard to ensure that they’ve got the right stuff. So that’s an update on my workshops. I welcome your thoughts and feedback.  (And, yes, I’d like to pull them all together in one place, but I haven’t found a platform I like yet; stay tuned!)
