Learnlets


Clark Quinn’s Learnings about Learning

Is “Workflow Learning” a myth?

24 September 2024 by Clark 5 Comments

There’s been a lot of talk, of late, about workflow learning. To be fair, Jay Cross was talking about learning in the flow of work way back in the late 1990s, but the idea has recently been co-opted and become current. Yet the question remains whether it’s real or a mislabeling (something I’m kind of anal about; see microlearning). So, I think it’s worth unpacking the concept to see what’s there (and what may not be). Is workflow learning a myth?

To start, the notion is that it’s learning at the moment of need. Which sounds good. Yet, do we really need learning? The idea Jay pointed to in his book Informal Learning drew on Gloria Gery’s work on helping people in the moment. Which is good! But is it learning? Gloria was really talking about performance support, where we’re looking to overcome our cognitive limitations, in particular memory, by putting information into the world instead of in the head. Which isn’t learning! It’s valuable, and we don’t do it enough, but it’s not learning.

Why? Well, because learning requires action and reflection. The latter can just be thinking about the implications, or, in Harold Jarche’s Personal Knowledge Mastery model, experimenting and representing. In formal learning, of course, it’s feedback. I’ve argued we could do that by providing just a thin layer on top of our performance support. However, I’ve never seen same! So, you’re going to do, and then not learn. Okay, if it’s biologically primary (something we’re wired to learn, like speaking), you’re liable to pick it up over time, but if it’s biologically secondary (something we’ve created and aren’t tuned for, e.g. reading), I’d suggest it’s less likely. Again, performance is the goal. Though learning can be useful to support what we’re good at: comprehending context and making complex decisions.

What is problematic is the notion of workflow and reflection in conjunction. Simply, if you’re reflecting, you’re by definition out of the workflow! You’re not performing, you’re stopping and thinking. Which is valuable, but not ‘flow’. Sure, I may be overly focused on workflow being in the ‘zone’, acting instead of thinking, but that, to me, is really the notion. Learning happens when you stop and contemplate and/or collaborate.

So, if you want to define workflow to include the reflection and thoughtful work, then there is such a thing. But I wonder if it’s more useful to separate out the reflection as something to value, facilitate, and develop. It’s not like we’re born with good reflection practices, or we wouldn’t need research on the value of concept mapping and sketchnoting, and how they’re better than highlighting. So being clear about the phases of work, and how to do each best, seems to me to be worthwhile.

Look, we should use performance support where we can. It’s typically cheaper and more effective than trying to put information into the head. We should also consider adding some learning content on top of performance support in cases where it helps for people to know why they’re doing something, not just what to do. Learning should be used when it’s the best solution, of course. But we should be clear about what we’re doing.

I can see arguments why talking about workflow learning is good. It may be a way to get those not in our field to think about performance support. I can also see why it’s bad, leading us into the mistaken belief that we can learn while we do without breaking up our actions. I don’t have a definitive answer to “is workflow learning a myth” (so this would be an addition to the ‘misconceptions’ section of my myths book ;). What I think is important, however, is to unpack the concepts, so at least we’re clear about what learning is, about what workflow is, and when we should do either. Thoughts?

Diagramming Feedback

10 September 2024 by Clark 1 Comment

I’ve wrestled with the concept of feedback for a while. I think Valerie Shute’s summary for ETS is superb, BTW. And, of course, I select a pragmatic subset for the purposes of communicating the essential elements. However, it’s always been a list of important items. Which isn’t how I want to do it in a webinar. I was thinking about it today, and I began to get an idea. So, I started diagramming feedback.

A person generates output, the model is used to determine whether it’s correct, incorrect output gets an explanation of why it’s wrong, and in either case the right answer follows. What are the essential elements of feedback? Well, it should be on the performance, not the individual. It should be model-based, in that you should be using models to explain how to perform, showing examples of the model being used in context, and then asking the learner to use them. The feedback, then, uses the model to explain why the performance went right or wrong. Beyond that, it should be minimal.

So, here I tried to show that the individual (or group, hmm) produces output. That output is evaluated against the model to ascertain correctness, or not. (Not the individual!) If the answer’s wrong, you say why, and then give the right answer. If it’s right, you just reinforce the right answer.

Of course, this representation doesn’t convey the minimal aspect. It’s also not clear about using the model in the feedback. Still, so far it’s a representation I can talk to. So, this is my first stab at diagramming feedback. I welcome same!
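For those who prefer code to boxes and arrows, here’s a minimal sketch of that same flow. It’s purely illustrative; the names (Evaluation, give_feedback) are invented, not from any actual tool:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    correct: bool
    explanation: str    # model-based reason the output went right or wrong
    right_answer: str

def give_feedback(evaluation: Evaluation) -> str:
    """Feedback is on the performance, not the person, and stays minimal."""
    if evaluation.correct:
        # Right answer: just reinforce it.
        return f"Correct: {evaluation.right_answer}."
    # Wrong answer: use the model to say why, then give the right answer.
    return (f"Not quite: {evaluation.explanation} "
            f"The right answer is {evaluation.right_answer}.")
```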

The Damage Done

20 August 2024 by Clark Leave a Comment

There’ve been recent discussions about misinformation. One question is, what does it hurt? When you consider myths, superstitions, and misconceptions (the breakdown in my book on L&D problems), what can arise? Let’s talk about the damage done.

So, let’s start with myths. These, I claim, are things that have been shown not to have value by empirical research. There are studies that have examined these claims and found no data to support them. For instance, accommodating learning styles is a waste. Yes, we know people differ in learning, but we don’t have a reliable basis for characterizing those differences. Moreover, people’s choices to work with (or against) their style don’t make a difference in their learning. Some of the instruments are theoretically flawed as well as psychometrically invalid.

What’s the harm? I’ll suggest several ways in which myths harm us. For one, they can cause people to spend resources (money & time) addressing them in ways that won’t have an impact. It’s a waste! We can also characterize people in ways that limit them; for instance, if people think they learn in a particular way, they may avoid a topic or invest effort in an inappropriate way to learn it. Investing in unproven approaches also perpetuates them, propagating the beliefs to others.

Superstitions, as I define them, are beliefs nobody would claim to believe, yet they somehow persist in our practices. For instance, few will claim to believe that telling is sufficient to achieve behavior change. Yet we continue to see information presentation and knowledge tests, such as “awareness” training. Why? This is a waste of effort; there aren’t outcomes from these approaches. Typically, they are legacies of expectations from previous decades, yet business practices haven’t been updated. Still, to the extent that we continue these practices, even while decrying them, we’re again wasting time and money. Maybe we tick boxes and make people happy, but we can (and should) do better.

The final category is misconceptions. These are beliefs that some hold and others decry. They aren’t invalid, but they only make sense in certain circumstances. I suggest that those who dismiss them don’t have the need, and those who tout them are in the appropriate circumstance. What matters is understanding when they make sense, and then using them, or not, appropriately. If you avoid them when they make sense, you may make your life harder. If you adopt them when they’re not appropriate, you could make mistakes or waste money.

At the end of the day, the damage done is the cost of wasting money and time. Understanding the choices is critical. To do so best, you can and should understand the underlying cognitive and learning sciences. You should also track the recognized translators of research into practice who can guide you without you having to read the original academese. To be professional in our practice, we need to know and use what’s known, and avoid what’s dubious. Please!

Failing right

13 August 2024 by Clark Leave a Comment

I’ve been reading Amy Edmondson’s Right Kind of Wrong, and I have to say it’s very worthwhile. I’ve been a fan of hers since her book Teaming introduced me to the notion of psychological safety. It’s an element I’ve incorporated into my thinking about innovation and learning. This new book talks about how we have beliefs about making mistakes, and how we can, and should, be failing right.

In this book, she uses examples to vibrantly talk about failure, and how it’s an important part of life. She goes on to talk about different types of failure, and the situations they can occur in, creating a matrix. This allows us to look at when and how to fail. Along the way, she talks about self, situational, and systemic failure.

One of the important takeaways, which echoes a point Donald Norman made in Design of Everyday Things, is that failure may not be our fault! Too often, bad design allows failure, instead of preventing it. Moreover, she makes the point that we have a bad attitude towards failure, not recognizing that it’s not only part of life, but can be valuable! When we make a mistake, and reflect, we can learn.

Of course, there are simple mistakes. I note that there’s some randomness in our architecture, e.g. To Err is Human. But also, there can be factors we haven’t accounted for, like bad design, or things out of our control. At the most significant level, she talks about complex systems, and how they can react in unpredictable ways. Along the way, what counts as ‘intelligent’ failure is made clear. Some failures are smart; others aren’t justified.

She also talks about how experiments are necessary to understand new domains. This is, in my mind, about innovation. She also gives prescriptions, at both the personal and org level. Dr. Edmondson talks about the value of persistence, of accepting ‘good enough’, and of not taking failure too personally. She also talks about sharing, as Jane Bozarth would say: Show Your Work. This is for both calling out problems and sharing failure.

Along with a minor quibble about the order in which she presents a couple of things, a more prominent miss, to me, is a small shift in focus. She talks about celebrating the ‘pivot’, where you change direction. However, I’d more specifically celebrate the learning. That is, whether we pivot or not, we say that learning something is good. Of course, I’m biased towards learning, but I’d rather celebrate the learning. Yes, we possibly would do something different, and celebrating action is good, but sharing the learning means others can learn from it too. Maybe I’m being too pedantic.

Still, this is another in her series of books exploring organizational improvement and putting useful tools into our hands. We can, and should, expect not to get everything right all the time, and instead should focus on failing right. Recommended.

 

The easy answer

16 July 2024 by Clark Leave a Comment

In working on something, I’m looking at the likely steps people take. Of course, I’m listing them from easiest to most useful (with the hope that folks understand they should take the latter). However, it’s making me think that, too often, people are looking at the easy answer, not the most accurate one. Because they really don’t know the problem. When does the easy answer make sense? Are we letting ourselves off the hook too much?

So, for instance, in learning we really should do analysis when someone asks for something. “We need a course on X.” “Ok, what tells you that you need this, and how will we know when it’s worked?” In a quick family convo, we established that this sort of un-analytical request is made all the time:

  • “Why isn’t my plant blooming?” (It’s not the season.)
  • “Fix this code.” (The input’s broken, not the code.)
  • …

Yet, people actually don’t do this up-front analysis. Why? It’s harder, it takes more time, it slows things down, it costs more. Besides, we know what the problem is.

[Diagram: diverge/converge across problem and solution]

Except, we don’t know what the problem is. Too often, the question or request is making some assumptions about the state of the world that may not be true. It may be the right answer, but it may not. Ensuring that you’ve identified the problem correctly is the first part of the design process, and you should diverge on exploration before you converge on a solution. That’s the double diamond, where you first explore the problem, before you explore a solution.

Perhaps counter-intuitively, this is more efficient. Why? Because you’re not expending resources solving the wrong problem. Are you sure you’ve gotten it right? How do you know when to take the easier path? If you know the answer you need, you’re better equipped to choose the level of solution that fits. If you don’t know the question, however, and make assumptions about the root cause, you can go off the rails. And end up spending effort you didn’t need to.

Look, I live in the real world. I have to take shortcuts (heck, I’m lazy ;). And I do. However, I like to do that when I know the answer, and know that the outcome is good enough to meet the need. I’ll go for the easy answer, if I know it’ll solve the problem well enough. But I can’t if I don’t know the question or problem, and just assume. And we know what happens when we ass-u-me.

A Learning Science Conference?

9 July 2024 by Clark Leave a Comment

[Banner: Learning Science Conference 2024, “Online. Asynchronous & Live Sessions”]

In our field of learning design (aka instructional design), it’s too frequently the case that folks don’t actually know the underlying learning science that guides processes, policies, and practices. Is this a problem? If it is, what is the remedy?

Consider that you wouldn’t want an electrician who didn’t understand the principles of electricity. Such a person might not understand, for instance, the importance of grounding, leaving open the possibility of burning down the house.

So, too, with learning. If you don’t understand learning science, you might not understand why learning styles are a waste of money, why information alone has little value, or why the alternatives to the right answer should reflect typical misconceptions. There’s lots more: models, context, and feedback are also among the topics whose nuances most folks don’t understand.

If you don’t understand learning science, you waste money. You are likely to design ineffective learning, wasting time and effort. Or you might expend unnecessary effort on things that don’t have an impact. Overall, it’s a path to the poorhouse.

Of course, there are other reasons why we don’t have the impact we should: mismatched expectations on costs and time, SME recalcitrance and hubris, and more. Still, you’re better equipped to counter these problems if you can justify your stance from sound research.

The way to address this, of course, also isn’t necessarily easy. You might read a book, though some can mislead you. And, you still don’t get answers if you have questions. Or, you could pay for a degree, but those can be quite expensive and ineffective. Too frequently they spend time on process and not enough on principles.

There’s another option, one we’re providing. What if you could get the core essentials, curated for their relevance? What if that content were provided asynchronously, buttressed by the opportunity for meaningful interaction, in a tight time frame (at different times depending on your location)? What if the presenters were some of the most important names in the field, individuals who’ve reliably demonstrated an ability to translate academic research into comprehensible principles? And, finally, what if this were delivered at an appropriate cost? Does that sound like a valuable proposition?

I’d like to invite you to the Learning Science Conference, put on by the Learning Development Accelerator. Faculty already confirmed include Ruth Clark (co-author of eLearning & The Science of Instruction), myself (author of Learning Science for Instructional Designers), Matt Richter (co-director of the Thiagi Group), and Nidhi Sachdeva (faculty at the University of Toronto). The curriculum covers nine of the most important elements of learning science, including learning, myths and barriers, motivation, informal and social learning, media, and evaluation.

This event is designed to leave you with the foundations necessary to design learning experiences that are both engaging and effective, as well as to deal with the expected roadblocks to success. Frankly, we see little else that’s as comprehensive and practical. We hope to see you there!

Break it down!

2 July 2024 by Clark 2 Comments

In our LDA Forum, someone posted a question asking about taking Cathy Moore’s Action Mapping for soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) in the context of not-for-profits, so…I’m not a good candidate to respond. However, what it does raise is a more common problem: how do you train things that are more ephemeral. Like, for instance, leadership, or communication? My short answer is “break it down”. What do I mean? Here’re some thoughts, and I welcome feedback!

Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.

I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only messages, but more success on sales, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.
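To make that ‘tag and count’ idea concrete, here’s a rough sketch; the topic keywords and message format are invented for illustration, and a real platform would likely use a classifier rather than keyword matching:

```python
from collections import Counter

# Invented topic keywords; a real system might use a trained classifier instead.
TOPICS = {
    "pricing": ["price", "discount", "quote"],
    "objections": ["concern", "pushback", "objection"],
    "wins": ["closed", "signed", "won"],
}

def tag_message(text: str) -> list[str]:
    """Return the topics a message appears to touch on."""
    lowered = text.lower()
    return [topic for topic, words in TOPICS.items()
            if any(word in lowered for word in words)]

def topic_counts(messages: list[str]) -> Counter:
    """Count how often each topic shows up across the team's messages."""
    counts: Counter = Counter()
    for message in messages:
        counts.update(tag_message(message))
    return counts
```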

Going more granular is what I think is the answer here, too. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down: how to work out loud, how to provide feedback, how to run group meetings. (I’m just reading Alex Edmans’ May Contain Lies, and it contains a lot of detail about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes good dynamics: things like making it safe yet accountable, providing feedback on behavior not on the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to accomplish. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.

It may be that you assess a priori, and discover that only some of the skills are missing in your teams. That upfront analysis should happen regardless, but is too infrequent. The interlocutor here also mentioned the audience complaining about the time for analysis. Yep, that’s a problem. I reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just some improvement. Yet, compared to throwing money away? Targeting intervention efforts seems like a logical sell. If only we lived in a rational world, eh?

Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And we should. Let’s not get away with vague intentions and explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take; I welcome hearing yours!

Diving or surfacing?

25 June 2024 by Clark Leave a Comment

In my regular questing, one of the phenomena I continue to explore is design. Investigating, for instance, reveals that, contrary to recommendations, designers approach practice more pragmatically. That’s something I’ve been experiencing both in my work with clients and in recent endeavors. So, reflecting: are, and should, folks be diving or surfacing?

The original issue is how designers design. If you look at recommendations, they typically suggest starting at the top-level conceptualization and working down, such as Jesse James Garrett’s Information Architecture approach (PDF of the Elements of User Experience; note that he puts the highest level of conceptualization at the bottom and argues to work up). Empirically, however, designers switch between top-down and bottom-up. What do I do?

Well, it of course depends on the project. Many times (and, ideally), I’m brought in early, to help conceptualize the strategy, leveraging learning science, design, organizational context, and more. I tend to lead the project’s top-level description, creating a ‘blueprint’ of where to go. From there, more pragmatic approaches make sense (e.g. bringing in developers). Then, I’m checking on progress, not doing the implementation. I suppose that’s like an architect. That is, my role is to stay at the top-level.

In other instances, I’m doing more. I frequently collaborate with the team to develop a solution. Or, sometimes, I get concrete to help communicate the vision that the blueprint documents. Which, in working with an unfamiliar team, isn’t unusual. That ‘telepathy’ comes with getting to know folks ;).

In those other instances, I too will find that pragmatic constraints influence the overarching conceptualization, and work back up to see how the guidelines need to be adapted to account for the particular instance. Or we need to disconnect from the details to remember what our original objective is. This isn’t a problem! In general, we should expect that ongoing development unearths realities that weren’t visible from above, and vice versa. We may have good general principles (e.g. from learning science), but then we need to adapt them to our circumstances, which are unlikely to be an exact match. In general, we need to abstract the best principles, and then de- and re-contextualize.

I find that while it’s harder work to wrestle with the details (more pay for IDs! ;), it’s very worthwhile. What’s developed is better as a result of testing and refining. In fact, this is a good argument about why we should iterate (and build it into our timelines and budgets). It’s hubris to assume that ‘if we build it, it is good’. So, let’s not assume we can either be diving or surfacing, but instead recognize we should cycle between them. Start at the top and work down, but then regularly check back up too!

Learning Debt?

18 June 2024 by Clark Leave a Comment

In our LDA conversation with David Irons, User Experience (UX) Strategist, for our Think Like A…series, he mentioned a concept I hadn’t really considered. The concept is ‘design debt’, as an extension of the idea of ‘tech debt’. I was familiar with the latter, but hadn’t thought of it from the UX side. Nor, the LXD side! Could we have a learning debt?

So, tech debt is the delta between what good technology design would suggest and what we do to get products out the door. For instance, using a sorting algorithm that’s quicker to write and fine with small numbers of entries, but doesn’t handle volume. The accrued debt only gets paid back once you go back and redesign. Which, too often, doesn’t happen, and the debt accumulates. The problems can make it difficult to expand capabilities, or can keep performance from scaling. I think of how Apple OS updates occasionally don’t really add new features but instead fix the internals. (Hasn’t seemed to happen as much lately?)
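As a toy illustration of that sorting example (entirely hypothetical, not from any particular product), the debt might look something like this:

```python
def sort_orders(orders: list[dict]) -> list[dict]:
    """Expedient insertion sort: fine for a handful of orders, O(n^2) at volume.

    The debt: once the product has to scale, someone has to come back and
    rework this (e.g., switch to the built-in O(n log n) sort below)."""
    result: list[dict] = []
    for order in orders:
        i = 0
        while i < len(result) and result[i]["total"] <= order["total"]:
            i += 1
        result.insert(i, order)
    return result

# Paying the debt down later:
# result = sorted(orders, key=lambda order: order["total"])
```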

Design debt is the UX equivalent. We see expedient shortcuts or gaps in the UX design, for instance. As Ward Cunningham, an agile proponent, says:

Design debt is all the good design concepts of solutions that you skipped in order to reach short-term goals. It’s all the corners you cut during or after the design stage, the moments when somebody said: “Forget it, let’s do it the simpler way, the users will make do.”

It’s a real thing. You may experience it when entering a phone number into a field, and then hear it’s not in the proper format (though there was no prior information about what the required format is). That’s bad design, and could (and should) be fixed.
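A hedged sketch of paying that particular debt down: accept what people naturally type and normalize it, rather than rejecting it against an unstated format. The ten-digit, North-America-only assumption here is purely for illustration:

```python
import re

def normalize_phone(raw: str) -> str | None:
    """Accept the ways people naturally type a phone number and normalize it,
    rather than rejecting input for not matching an unstated format."""
    digits = re.sub(r"\D", "", raw)              # keep only the digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                      # drop a leading country code
    if len(digits) != 10:
        return None                              # genuinely not a usable number
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

# normalize_phone("555.123.4567") == normalize_phone("(555) 123-4567")
```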

This could be true in learning, too. We could have ‘learning debt’. When we make practice (and I should note, for previous and future posts, that includes any assessment where learners apply the knowledge we’ve provided) about knowledge instead of application of knowledge, for instance, we’re creating a gap between what they’ve learned to do and what they need to do. That’s a problem. Or when we put in content because someone insists it has to be there, rather than a designer deciding it’s necessary for the learning. Which adds to cognitive load and undermines learning!

How often do we go back and improve our courses? If we’re offering workshops or some other instruction, we can adapt. When we create elearning, however, we tend to release it and forget it. When I ask audiences if they have legacy courses that are out of date and unused but still hanging around their LMS, everyone raises their hands. We may update courses whose info has changed, but how many times do we go back and redo asynchronous courses because we’ve tracked them and have evidence that they’re not working sufficiently? Yes, I acknowledge it happens, but not often enough. (*cough* We don’t evaluate our courses sufficiently nor appropriately. *cough*)

Ok, so everyone makes tradeoffs. However, which ones should we make? The evidence suggests erring on the side of better practice and less content. Prototyping and testing are another step we can take to remove debt up front. With UX, gaps in design early on cost more to fix later. We don’t typically go back and fix, but we can and should. Better yet, test and fix before it goes live. Another way to think about it is that learning debt is money wasted. Build, run, and not learn? Or build, test, and refine until learning happens?

There are debts we can sustain, and ones we can’t. And shouldn’t. When the learning doesn’t even happen, that’s not sustainable. Our Minimum Viable Product has to be at least viable. Too often, it’s not. Let’s ensure that viable means it achieves an outcome, eh? It might not be the optimal improvement, or as minimal in time as possible, but at least it’s achieving an outcome. That’s better than releasing a useless product (despite no one knowing), even if we get paid (internally or externally). What am I missing?

 

Reflecting on adaptive learning technology

11 June 2024 by Clark 1 Comment

My last real job before becoming independent (long story ;) was leading a team developing an adaptive learning platform. The underlying proposition was the basis for a topic I identified as one of my themes. Thinking about it in the current context I realize that there’re some new twists. So here I’m reflecting on adaptive learning technology.

So, my premise for the past couple of decades is to decouple what learners see from how it’s delivered. That is, have discrete learning ‘objects’, and then pull them together to create the experience. I’ve argued elsewhere that the right granularity was by learning role: concepts are separate from examples, from practice, etc. (I had team members participating in the standards process.) The adaptive platform was going to use these learning objects to customize the sequence for different learners. This was both within a particular learning objective, and across a map of the entire task hierarchy.
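As a minimal sketch of what that decoupling could look like (the names and structures here are invented for illustration, not the platform’s actual design):

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    CONCEPT = "concept"
    EXAMPLE = "example"
    PRACTICE = "practice"

@dataclass
class LearningObject:
    objective: str   # which objective in the task hierarchy it serves
    role: Role       # the pedagogical role of the object
    content: str     # what the learner actually sees

def assemble(objects: list[LearningObject], objective: str,
             sequence: list[Role]) -> list[LearningObject]:
    """Pull discrete objects together into one experience for an objective,
    in whatever order a separate pedagogical model asks for."""
    by_role = {obj.role: obj for obj in objects if obj.objective == objective}
    return [by_role[role] for role in sequence if role in by_role]
```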

The way the platform was going to operate was typical of intelligent tutoring systems, with a twist. We had a model of the learner, and a model of the pedagogy, but not an explicit model of expertise. Instead, the expertise was intrinsic to the task hierarchy. This was easier to develop, though unlikely to be as effective. Still, it was scalable, and with good learning science behind the programming, it should do a good job.

Moreover, we were going to then have machine learning, over time, improve the model. With enough people using the system, we would be able to collect data to refine the parameters of the teaching model. We could possibly be collecting valuable learning science evidence as well.

One of the barriers was developing content to our specific model. Yet I believed then, and still now, that if you developed it to a standard, it should be interoperable. (We’re glossing over lots of other inside arguments, such as whether smart object or smart system, how to add parameters, etc.) That was decades ago, and our approach was blindsided by politics and greed (long sordid story best regaled privately over libations). While subsequent systems have used a similar approach (*cough* Knewton *cough*), there’s not an open market, nor does SCORM or xAPI specifically provide the necessary standard.

Artificial intelligence (AI) has changed over time. While evolutionary, it appears revolutionary in what we’ve seen recently. Is there anything there for our purposes? I want to suggest no. Tom Reamy, author of Deep Text, argues that hybrids of symbolic and sub-symbolic AI (generative AI is an instance of the latter) have potential, and that’s what we were doing. Systems trained on the internet or other corpuses of images and/or text aren’t going to provide the necessary guidance. If you had a sufficient quantity of data about learning experiences with the characteristics of your own system, you could do it, but if it exists it’s proprietary.

For adaptive learning about tasks (not knowledge; a performance focus means we’re talking about ‘do’, not know), you need to focus on tasks. That isn’t something AI really understands, as it doesn’t really have a way to comprehend context. You can tell it, but it also doesn’t necessarily know learning science either (ChatGPT can still promote learning styles!). And, I don’t think we have enough training data to train a machine learning system to do a good job of adapting learning. I suppose you could use learning science to generate a training set, but why? Why not just embed it in rules, and have the rules work to generate recommendations (part of our algorithm was a way to handle this)? And, as said, once you start running you will eventually have enough data to start tuning the rules.
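For instance, a rules-based recommendation might look something like the sketch below. The learner-model fields and thresholds are invented for illustration; they’re not our platform’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class LearnerState:
    attempts: int           # practice attempts on the current objective
    success_rate: float     # proportion of recent attempts that were correct
    objective_mastered: bool

def next_step(state: LearnerState) -> str:
    """Simple, inspectable rules; the parameters can be tuned as data accrues."""
    if state.objective_mastered:
        return "advance"           # move on in the task hierarchy
    if state.attempts == 0:
        return "show_example"      # model the performance before practice
    if state.success_rate < 0.5:
        return "review_concept"    # struggling, so revisit the underlying model
    return "more_practice"         # close, so keep practicing
```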

Look, I can see using generative AI to provide text, or images, but not sequencing, at least not without a rich model. Can AI generate adaptive plans? I’m skeptical. It can do it for knowledge, for sure, generating a semantic tree. However, I don’t yet see how it can decide what application of that knowledge means, systematically. Happy to be wrong, but until I’m presented with a mechanism, I’m sticking to explicit learning rules. So, where am I wrong?
