Learnlets


Clark Quinn’s Learnings about Learning


The Wrong Bucket Lists

19 April 2022 by Clark

Our brains like to categorize things; in fact, we can’t really not do it. This has many benefits: we can better predict outcomes when we can categorize the situation, we can respond in appropriate ways via shared conceptualizations, and so on. It also has some downsides: stereotyping, for one. I reckon there’re tradeoffs, of course. But we also have to worry about overusing categorization; when we do, we risk making the wrong bucket lists.

Our desire for simplification and categorization is manifest. Consider the continued interest in reading one’s horoscope, for instance. Or the continued success of personality typing, despite the evidence of its lack of utility. Other than the Big 5 or HEXACO, the rest are problematic at best. I’m just reading Annie Murphy Paul’s The Cult of Personality Testing (the predecessor to her The Extended Mind), and hearing of abuses like Rorschach tests being used in child custody decisions is really horrific. Similarly, people being denied employment based on their color (not race, but their ‘color’ on a particular test, blue or orange) isn’t new and continues (as does the other, sadly). Most of these tests don’t stand up to scientific scrutiny!

This explains the appeal of learning styles, too, another myth that won’t die. Generations, similarly. We like simplification. Further, there are times it’s useful. For example, recording your blood type can prevent potentially life-threatening complications. Having a basis to adapt learning, such as people’s performance (success or failure), is useful too. Even more so if additional factors are added, such as confidence. Yet we can overdo it. We might over-categorize, and miss important nuances.

Todd Rose’s The End of Average made an excellent case for not trying to force people into one bucket. In it, he points out that when we assign a single grade for complex performance, we miss important nuances. For instance, if you get it wrong, why did you get it wrong? It matters in terms of the feedback we might give you. If you hold one misconception rather than another, you should get feedback specific to yours.

How do we reconcile this? There’re benefits to simplifications, and risks. We have to be careful to simplify as much as we can, and no further. Which isn’t an easy task to undertake. The best recommendation I can make is to be mindful of the risks when you do simplify. Maybe start more broadly, and then winnow down? Explicitly consider the risks and costs as well as the benefits and savings. For instance, we’re using learner personas in a project. Often, these personas differ on important dimensions, and characterize the audience space in ways that a simple ‘the learner’ can’t capture.

Overall, we want to make sure we’re only using simplifications and categorizations in ways that are both helpful and scrutable. When we do so, we can avoid the wrong bucket lists. That should be our goal, after all.

Filed Under: design, strategy

Confidence and Correctness

5 April 2022 by Clark

Not surprisingly, I am prompted regularly to ponder new things. (Too often, in the wee hours of the morning…) In this case, I realize I haven’t given a lot of thought to the role of confidence (PDF). It’s a big thing in the system my co-author on an AI and ID paper, Markus Bernhardt, represents, so I realized it’s time to think about it some more. Here are some thoughts on confidence and correctness.

The idea is that it matters whether you get it right, or not, and whether you’re confident, or not. That is, they interact (creating the familiar four-quadrant model). You can be wrong and unconfident (lo/no), wrong and confident (hi/no), right and unconfident (lo/yay), and right and confident (hi/yay). Those are arguably importantly different, in particular for what they imply about what sort of intervention makes sense.

I was pondering what this suggests for interventions. I turned it 90 degrees to the left, to put lo/no at the left, or beginning spot, and hi/yay at the right, with the other two in between. Simplified, my view is that if you’re wrong and not confident, you don’t know it. If you’re wrong and believe you know it, you’re at a potential teachable moment. When you’re right, but not confident, you’re ready for more practice. If you’re right and confident, it may be time to move on.

Which suggests, looking back at my previous exploration of worked examples, that the very first thing to do, if the material is new to learners, is to provide worked examples. At some point, you give them practice. If they get it right but aren’t confident, you give more practice at roughly the same level. If they’re wrong but confident, you give them feedback (and arguably ramp them backwards). Eventually they’re getting it right and confident, and at that point you move on (except for some spaced and varied reactivation).
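
To make that concrete, here’s a minimal sketch of the logic, assuming an adaptive system that tracks correctness and self-reported confidence. The names and structure are my illustration only, not any particular system’s:

    # Hypothetical sketch of the correctness-by-confidence logic above.
    # The enum values and function are illustrative assumptions,
    # not from any real adaptive system.
    from enum import Enum, auto

    class NextStep(Enum):
        WORKED_EXAMPLE = auto()      # new material: show the thinking first
        MORE_PRACTICE = auto()       # right but unsure: same level again
        FEEDBACK_RAMP_BACK = auto()  # wrong but confident: teachable moment
        MOVE_ON = auto()             # right and confident: advance (plus spaced reactivation)

    def next_step(seen_examples: bool, correct: bool, confident: bool) -> NextStep:
        if not seen_examples:
            return NextStep.WORKED_EXAMPLE
        if correct and confident:        # hi/yay
            return NextStep.MOVE_ON
        if correct:                      # lo/yay: ready for more practice
            return NextStep.MORE_PRACTICE
        if confident:                    # hi/no: feedback, arguably ramp backwards
            return NextStep.FEEDBACK_RAMP_BACK
        return NextStep.WORKED_EXAMPLE   # lo/no: back to worked examples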

Assessing confidence is an extra step, but there seems to be a valid reason to incorporate it in your learning design. The benefits of being able to more accurately target your interventions, at least in an adaptive system, suggest that the effort is worth it. That’s my initial thinking on confidence and correctness. What’s yours?

Filed Under: design, meta-learning, strategy

Emphasis and Effort

22 March 2022 by Clark

For reasons that aren’t quite clear (even to me), I was thinking about where, on a continuum, L&D elements fit. Where does performance support go? Formal learning? Informal learning? I began to think that it depends on what focus you’re thinking of. So here’re some nascent thoughts on emphasis and effort.

To start with, I generally think of formal learning as the starting point. For instance, in thinking about performance & development (as an alternative to learning & development), I put training first. Similarly, in my strategy work, I suggest the first step is to make learning science more central in training. Here, the order is:

  • Formal Learning
  • Performance Support
  • Informal Learning

Here I’m looking as much at where we typically start. This may well be because training is always the first line of response (throw training at it!). Also perhaps because it’s familiar (it looks like school).

However, in another cut at it, I started with performance support. Here, I was thinking more about the utility to achieve goals rather than the way L&D allocates resources. That is, from a performer’s perspective, if the answer can be in the world, it should be. I can use a tool to achieve my goal rather than have to take a course. Still, taking a pre-digested course is easier than having to work with others to collaboratively solve it. Of course, if someone else has the answer, just asking and getting it is easier than working to create an unknown answer. (So, do I need to separate out communication from collaboration? Hmm…) Thus, the list here might be:

  • Performance Support
  • Formal Learning
  • Informal Learning

However, if I look at it from the effort required of L&D, a new order emerges. Here, formal learning is hardest. That is, if you’re doing it right. Successfully getting a persistent change in the way someone does something is harder than even facilitating informal learning, and performance support is easiest. Not saying that any are trivial, mind you; designing good job aids isn’t easy, it’s just not as hard as designing a whole course. Then the list comes out like this:

  • Performance Support
  • Informal Learning
  • Formal Learning

I guess there isn’t one answer. To do this successfully, however, requires understanding how to do all of the above, and then applying them as priorities demand. If you have expert performers, you’ll do something different than if you have high turnover. If you’re addressing something complex, your design strategies may differ from when it’s straightforward but important. However, you do need to know the tradeoffs in emphasis and effort to make the right calls. Am I missing something important here?

Filed Under: design, strategy

Working with SMEs

15 March 2022 by Clark

In a recent post, I talked about how expertise gets compiled away, and the impact on designing learning and documentation. Someone, of course, asked how you then work with SMEs to get the necessary information. Connie Malamed, one of our recognized research translators, has recently written about getting tacit knowledge, but I also want to address the more usual process. I thought I’d written about it somewhere, but I can’t find it. So here are some thoughts on working with SMEs.

First, I’ve heard from several folks experienced in this that any one SME may not have both necessary elements. One element is having a good model to guide the performance. The second element is the ability to articulate that model. Their solution is to work with SMEs in groups. Guy Wallace (Eppic), Roger Schank (Socratic Arts), and Norman Bier (CMU) have all mentioned to me that they’ve found utility in getting SMEs together as a group and having that knowledge negotiation unpack the necessary learnings. They’re all folks worth listening to. You have to manage the process right, of course, but if you can do it, it’s useful.

I suggest that you also want several different types of SMEs. You want not only top performers and theoretical experts, but also just-past novices (a suggestion also attributable to Guy) and supervisors. Theorists can give you models, while top performers can talk about the practical implications. Novices can let you know what they found hard to understand, and supervisors provide insight into what performers typically do wrong. All are helpful information for different parts of the learning.

Another trick I use is to focus on decisions. I argue that making better decisions will be more important to organizations than the ability to recite knowledge. SMEs do have access to all the knowledge they learned, and it’s easy for them to focus on that. That’s where you get ‘info dump’ courses and bullet point-laden slides. By using decisions as a focus, you cut through the knowledge. “What decisions will they make differently/better as a result of this knowledge?” is a helpful question.

You can use questionnaires as well. Asking specifically about the elements (models, misconceptions, consequences) can be a good preliminary step before you actually talk to them. Or have a template for content for them to fill out, as sketched below. Any guidance and structure helps keep them focused.
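
As an illustration only (this is my hypothetical structure, not a standard template), such a form might capture:

    # Hypothetical SME pre-interview template; the fields follow the
    # elements named above (decisions, models, misconceptions, consequences).
    sme_template = {
        "task": "",            # the performance being addressed
        "key_decisions": [],   # what will they decide differently/better?
        "models": [],          # the concepts/models that guide those decisions
        "misconceptions": [],  # where do performers reliably go wrong, and why?
        "consequences": [],    # what happens when they get it wrong?
        "examples": [],        # stories that show the thinking in context
    }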

Another preparatory step is to create a draft proposal of the information. You’ll likely be getting a dump of PDFs and PPTs. Process that material, and make your first, best guess. It’s easier to critique than generate, so if you’re willing to be wrong (and why not?), you can have them shoot holes in what you did. You’ll have focused on decisions (right?), and they’ll fix it, but you’ll have biased them for action.

Of course, you want to ensure you test for confirmation. You should circulate what you have learned, and get validation. You’ll need to have clear objectives that operationalize your learnings. You then should prototype and test what you’ve developed and see if it actually changes the behavior in useful ways. Ensure that your focus actually leads to the necessary change.

There are other elements you want from SMEs, such as their personal interest. However, it’s critical that you get them to focus on behavior change. It’s not easy, but it’s part of the job. Working with SMEs, correctly, is key to designing learning experiences that address real needs. These are my thoughts, what are yours?

Filed Under: design, strategy

Experts and Explanations

8 March 2022 by Clark

I’ve been going through several different forms of expert documentation. As a consequence, I’ve been experiencing a lot of the problems with that! Experts have trouble articulating their thinking. This requires some extra work on the part of those who work with them, whether instructional designers, technical writers, editors, whoever. There are some reliable problems with experts and explanations that are worth reviewing.

The start of the problem is that the way we acquire expertise is to take our conscious thinking and automatize it. We want to make our thinking so automatic that we no longer have to consciously think about it. So, we basically compile it away. Which creates a problem. For one, what we process into memory may not bear a close resemblance to what we heard and applied. That is, the semantic language we use to guide our practice and internalize may not be what we store as we automate it.

It’s also the case that we lose access to that compiled-away expertise. There’s evidence of this: for one, research by the Cognitive Technology group at the University of Southern California shows experts can’t access about 70% of what they do! Another piece of evidence is the widespread failure of so-called ‘expert systems’ in the 80s, contributing to the AI winter. Whether the locus of the problem is in what actually gets stored, or access to it, the result is that what we were told to do, and say we do, may not be close to what we actually do.

Another problem is that experts also lose touch with what they grappled with as novices. What they take for granted isn’t even on the radar of novices. So it’s difficult to get them to provide good support for acquiring skills or understanding. Their attempts at explanations, whether for reference or instruction, fail.

All told, this leads to systematic gaps in content. I’ve been seeing this manifest in explanations that may say what to do, but not why or how. There may be a lack of examples, and even for the examples I do see, the thinking behind them isn’t there. There’s also a lack of visual support: no diagrams when it’s conceptual relationships that need understanding, no images when context is needed. They shouldn’t necessarily be blamed, because they don’t need the support and can’t even imagine that others do!

It’s clear that experts should not be the ones doing the explanations. They’re experts, and they have valuable input, but there needs to be a process to avoid these problems. We need tech writers, IDs, and others to work with experts to get this right. Too often we see experts being tasked with doing the explanations, and we live with the consequences.

What to do? One step is to let experts know that their expertise is in their domain, but the expertise in extracting and presenting it lies with others. To do so convincingly, you’ll need the science about why. For another, know techniques to unearth that underlying thinking. Also, allow time in your schedule for this to happen. Don’t think the SME can just give you information; you’ll have to process what you get to rearrange it into something useful. You may also need some carrots and sticks.

As I wrestle with the outputs of experts, here’s my plea. There are wonderful ways experts and explanations can work out, but don’t take it for granted. Don’t give experts the job of communicating to anyone but other experts; leave that to those who are expert at working with experts to get explanations. Fair enough?

Filed Under: design, meta-learning, strategy

Examples before practice

1 March 2022 by Clark

I’ve been wrong before, and I’ll be wrong again, and that’s ok <warning: link is NSFW>. It’s like with science: if you change your mind, you weren’t lying before, you’ve learned more now. So I’ve been wrong about the emphasis between practice and examples. What I’ve learned is that, in general, practice isn’t the only area of importance; there are also benefits to examples before practice.

So, as part of the Learning Development Accelerator‘s YOK (You Oughta Know) series, I got the chance to interview John Sweller. I’ve known John, I’m very honored to say, from my days at UNSW. I was aware of his reputation as a cog sci luminary, but he also turned out to be a really nice person. He’s the originator of Cognitive Load Theory (CLT), and he was kind enough to agree to talk about it.

As background, he’s tapped into David Geary’s biologically primary and biologically secondary learning. The core idea is that some things we’ve evolved to learn, like speaking. Then there are things we’ve developed intellectually, like reading and writing, that aren’t natural. Instruction exists to help us acquire the latter. The latter typically has high ‘element interactivity’, whereby there are complex interrelationships to master. That is, it’s complex.

CLT posits that we have limited cognitive capacity, and overwhelming that capacity interferes with learning. The model talks about two types of load. The first is intrinsic load, that implied by the learning task. The second is extrinsic load, coming from additional factors in the particular situation. The premise is that learning complex things (biologically secondary) has such a high intrinsic load that we really need to focus on managing load so we can gradually acquire the entailed relationships.

There are a number of implications of CLT, but one is about the value of worked examples. An important element is showing the thinking behind the steps. A second empirical result is that worked examples are better than practice! At least initially, for novices. Yet this upends one of my recommendations, which is generally that the most important thing we can do to improve our learning is focus on better practice. I still believe that, but now with the caveat: after worked examples.

Now, he didn’t tell us when that happens, e.g. when you switch from worked examples to practice. However, like the answer to how much spacing is needed for the spaced practice effect, I suspect the answer is ‘it depends’. There’s the ‘expertise reversal’ effect, which says that as you gain experience, the value of worked examples falls and the value of practice rises. That point, I’d suggest, is dependent on the prior knowledge of the learners, the complexity of the material, the scope, and more.
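
To illustrate (and only illustrate; this rule and threshold are my invention, not Sweller’s), an adaptive sequence might fade from examples to practice something like this:

    # Hypothetical fading rule reflecting the expertise-reversal effect:
    # novices get worked examples; as successes accumulate, shift to practice.
    # The threshold formula is an arbitrary illustration.
    def choose_activity(successful_attempts: int, complexity: float) -> str:
        # complexity in [0, 1]; more complex material delays the switch
        threshold = 2 + round(4 * complexity)
        if successful_attempts < threshold:
            return "worked_example"  # show the steps and the thinking behind them
        return "practice"            # meaningful practice, with feedback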

I’m now recommending, particularly for new material, that improving the learning outcomes includes meaningful practice after quality worked examples. That’s my new, better, understanding. Make sense?

As an aside, I talked about CLT in my most recent book, on learning science, with a diagram. In it, I only included intrinsic and extrinsic (extraneous) load, as those two seemed critical, yet the classic theory also includes a third type, germane load. One of the audience members asked him about that, and John opined that he probably needn’t have included germane. Vindication!

Filed Under: design, meta-learning

Good and bad advice all in one!

22 February 2022 by Clark

I was asked to go read an article and weigh in. First, please don’t do this if you don’t know me. However, that’s not the topic here; instead, I want to comment on the article. Realize that if you ask me to read an article, you’re opening yourself up to my opinion, good or bad. This one’s interesting, because it’s both. Then the question is how you deal with good and bad advice all in one.

This article is about microlearning. If you’ve been paying attention (and there’s no reason you should be), I’ve gone off on the term before. I think it’s used loosely, and that’s a problem, because there are separate meanings, which require separate designs, and not distinguishing them means it’s not clear you know what you’re talking about. (If someone uses the term, I’m liable to ask which they mean! You might do the same.)

This article starts out saying that 3-5 minute videos are not microlearning. I have to agree with that. However, the author then goes on to document 15 points that are important about microlearning. I’ll give credit for the admission that there’s no claim that this is a necessary and complete set. Then, unfortunately, I also have to remove credit for providing no data to support the claims! Thus, we have to evaluate each on its own merits. Sorry, but I kinda prefer some sort of evidence, rather than a ‘self-evident’ fallback.

For instance, there’s a claim for brevity. I’ve liked the admonition (e.g. by JD Dillon) that microlearning should be no longer, and no shorter, than necessary. However, there’s also a claim here that it should be “3 – 10 minutes of attention span”. Why? What determines this? Human attention is complex, and we can disappear into novels, or films, or games, for hours. Yes, “Time for learning is a critical derailer”, but… it’s a function of how important, complex, and costly-if-wrong the topic is. There’s no one magic guideline.

The advice continues in this frame: there’re calls for simplicity, minimalism, etc. Most of these are good principles, when appropriately constrained. However, the arbitrary claim that “one concept at a time is the golden rule” isn’t necessarily right, and isn’t based on anything other than “our brains need time for processing”. Yes, that’s what automation is about, but to build chunks for short-term memory, we have to activate things in juxtaposition. Is that one concept? It’s too vague.

However, it could be tolerated if some of the advice didn’t fall prey to fallacious reasoning. So, for instance, the call for gamification leans into “Millennials and Gen Z workforce” claims. This is a myth. Gamification itself is already dubious, and using a bad basis as an assumed foundation exacerbates the problem. There are other problems as well. For one, automatically assuming social is useful is a mistake. Tying in competition, on the basis of a supposed need to compete, is a facile suggestion. Using terms like ‘horde’ and ‘herd’ actually feels demeaning to the value of community. A bald statement like “Numbers speak louder than words!” similarly seems to suggest that marketing trumps substance. I don’t agree.

Overall, this article is a mixed bag. So then the question arises: how do you rate it? What do you do? Obviously, I had to take it apart. The desire for a quick comment isn’t sufficient to address a complex suite of decent principles mixed up with bad advice justified (if at all) on false premises. I have to say that this isn’t worth your time. There’s better advice to be had, including on microlearning. In general, I’ll suggest that if there’s good and bad advice all in one, it’s overall bad. Caveat emptor!

Filed Under: design, meta-learning, strategy

Learning or Performance Strategy

1 February 2022 by Clark

Of late, I’m working in a couple of engagements where the issue of learning and performance strategy has come up. It has prompted some thoughts, both on my part and the part of my clients. I think it’s worth laying out some of the issues and thinking, and of course I welcome your thoughts. So here are some reflections on whether to use learning or performance strategy as an organizing concept.

In one case, an organization decreed that they needed a learning strategy. Taken with my backwards design diagram from the learning science book, I was tasked with determining what that means. In this case, the audience can’t be mandated to take classes or tutorials. So really, the only options are to support performance in the moment and develop people over time. Thus we focus on job aids and examples. I think of it as a ‘performance strategy’, not a learning one.

In the other case, an organization is executing on a shift from a training philosophy to a performance focus. Which of course I laud, but the powers-that-be expect it to yield less training without much other change. Here I’m pushing for performance support, and the thinking is largely welcome. However, it’s a mindset shift for a group that previously was developing training.

In general, I support thinking that goes beyond the course, and for the optimal execution side of a full ecosystem, you want to look at outcomes and let those drive you. It includes performance consulting, so you’re applying the right solution to performance gaps, not the convenient one (read: ‘courses’ ;). Thus, I think it makes more sense to talk about a performance strategy than a learning one.

Even then, the question becomes what such a strategy really entails, whether learning or performance. Really, it’s about having a plan in place to systematically prioritize needs and address them in effective ways. It’s not just design processes that reflect evidence-informed principles, though it includes that. It’s also, however, ways to identify and track problems, attach organizational costs and solution costs, and choose where to invest resources. It includes front-end analysis, but also ongoing monitoring.

It also involves other elements. For one, the technology to hand: what solutions are in use, with a process of ongoing review. This includes formal learning tools such as the LMS and LXP, but also informal learning tools such as social media platforms and collaborative documents. Another issue is management: lifecycle monitoring, ownership, and costs.

There’s a lot that goes into it, but being strategic about your approach keeps you from just being tactical and missing the forest for the trees. A lot of L&D is reactive, and I am suggesting that L&D needs to become proactive. This includes going from courses to performance, as a first step. The next step is facilitating informal learning and driving innovation in the organization. Associated elements include meaningful measurement and truly understanding how we learn, for a firm basis upon which to ground both formal and informal learning. Those are my thoughts on a learning or performance strategy; what am I missing?

Filed Under: design, social, strategy

What makes a good book?

25 January 2022 by Clark

I was in contact with a person about a potential book, and she followed up with an interesting question: what’s the vision I have for publishing? She was asking what I thought makes a good book. Of course, I hadn’t really articulated it! I responded, but thought I should share my thinking with you as well. In particular, to get your thoughts! So, what makes a good book? (I’m talking non-fiction here, of course.)

My first response was that I like books that take a sensible approach to a subject. That is, they start where the learner is and get them realizing this is an important topic. Then the book walks them through the thinking with models and examples. Ultimately, a book should leave them equipped to do new things. In a sense, it’s the author leading the reader through a narrative that leaves them with a different and valuable view of the world.

I think these books can take different forms. Some shake up your world view with new perspectives, for example Don Norman’s Design of Everyday Things or Todd Rose’s The End of Average. Another type provides deep coverage of an important topic, such as Patti Shank’s Write Better Multiple-Choice Questions. A third type leads you through a process, such as Cathy Moore’s Map It. These are rough characterizations that may not be mutually exclusive, but each can be done to fit the description above.

To me, the necessary elements are that it’s readable, authoritative, and worthwhile. That is, first, there’s a narrative flow that makes it easy to process. For instance, Annie Murphy Paul’s The Extended Mind takes a journalistic approach to important phenomena. Also, a book needs an evidence base: grounding in documented experience and/or science. It can re-spin topics (I’m thinking here of Lisa Feldman Barrett’s How Emotions Are Made), but must offer a viable reinterpretation. Finally, it has to be something that’s worth covering. That may differ by reader, but it has to be applicable to a field. You should leave with a new perspective and potentially new capabilities.

That’s what came off the top of my head. What am I missing in what makes a good book?

Filed Under: design

The Performance Ecosystem and L&D

11 January 2022 by Clark

On LinkedIn recently, a survey in a post asked whether L&D should simply become performance consulting (Y/N). In the ensuing discussion, a comment was made that the binary choice was flawed, and that a richer picture was possible. I was extremely pleased when the commenter referred to my Revolutionize Learning & Development book, and posted a diagram from it. I backed her comment, but it occurs to me that there’s more here, and of course I have a connection. So here’re some thoughts on the Performance Ecosystem and L&D.

To start, she cited how I wanted to move to Performance and Development. Indeed, I’ve posted about it, and included a diagram. In it, performance consulting is represented, but as she noticed, there’s more. I think performance consulting is great, but…it’s not everything. To me, it only addresses the ‘optimal execution’ side of the picture, and ignores the ‘continual innovation’ opportunity.

To be fair, suggesting that L&D take responsibility for informal learning could be considered a stretch. My argument is simply that informal learning has practices and policies that can optimize outcomes, and that it’s a necessary component of success going forward. (I note that problem-solving, design, research, and innovation all start without a known answer, so they’re learning too!) It’s not necessarily L&D’s role, but who else knows (or should know) more about learning?

So, innovation is an opportunity. A big one, I suggest. It’s a chance to move to the most valuable role in the organization, going forward. Orgs need to innovate, and facilitating the best innovation is going to be a critical role. Why not L&D? Yes, we have to get out of our comfort zone, start working with other business units, and most importantly know learning. So? We should anyway!

The infrastructure necessary is what I call the performance ecosystem. It’s about formal learning, but also more. That includes social, plus information and learning resources. It includes facilitation as well as performance interventions. It’s about technology, but used in ways that align with our brains.

The interesting issue for me is how to awaken this awareness. I suggest mobile is a gateway to the appropriate thinking. I wrote about mobile before writing the Revolution book (as my then-publisher required), but even there I laid out the case that mobile was not (just) about formal learning. Indeed, when you look at the way people use mobile, it’s very different. It’s also a digital platform, which means that it supports multiple outcomes.

Thus, mobile thinking is a way to break through the mindset of courses, and start looking at the bigger picture of technology supporting how we think, work, and learn to the success of our organizations. Which is why I’m happy to say that I’ll again be running the mobile course with Allen Academy, starting next week. Through 18 Jan, they’re offering this as a two-fer, so you get both the mobile and the learning science course for one low price! Together, you’re addressing my silly clip about L&D, both doing courses well and going beyond them.

If you want to get your mind around the performance ecosystem and L&D, I suggest that mobile learning is an effective vehicle. You get deep advice about mobile, but it also generalizes to digital technology overall. The course itself looks at formal learning, performance support, informal learning, and more, as well as strategic issues. Coupled with learning science, this is a real grounding in the most important opportunities and necessities facing L&D today. Whether you call it P&D or L&D, these are core concepts. Hope to see you there!


Filed Under: design
