Learnlets

Clark Quinn’s Learnings about Learning

My Personal Knowledge Management Approach

29 March 2022 by Clark

Last week, in our Learning Development Accelerator You Oughta Know session, we had Harold Jarche as a guest. Harold's known for many things, but in particular for his approach to continual learning. Amongst the things he shared was a collection of others' approaches. I checked and I hadn't made a contribution! So, without further ado, here's my personal knowledge management approach.

First, Harold's Personal Knowledge Management (PKM) model has three components: seek, sense, and share. Seeking is about information coming in, that is, what you're looking for and the feeds you track. It can be in any conceivable channel, and one of the important things is that it's your seeking. Then, you make sense of what comes in, finding ways to comprehend and make use of it. The final step is to share back out the sense you've made. It's a notion of contributing back. Importantly, it's not necessarily that anybody consumes what you share; the fact that you've prepared it for others is part of the benefit you receive.

Seek

Most seeking is two-fold, and mine's no exception. First of all, there are the 'as needed' searches for specific information. Here I typically use DuckDuckGo as my search engine, and often end up at Wikipedia. With much experience, I trust it. If there are multiple hits and not a definitive one, I'll scan the sources as well as the titles, and likely open several. Then I review them until I'm happy.

The second part is the feeds. I have a number of blogs I'm subscribed to. There are also the people I follow on Twitter. On LinkedIn, a while ago I actively removed the follows on all my connections, retaining only those for folks I trust. As I add new people, I similarly select those I know to trust, plus those who look interesting from a role, domain, location, or other diversity factor. An important element is to be active in selecting feeds, and to review your selections from time to time.
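(As a purely illustrative aside, and not part of Harold's model or my actual toolchain: here's a minimal sketch of what pulling a few blog feeds might look like in code, using Python's third-party feedparser library. The feed URLs are placeholders.)

```python
# Minimal sketch of aggregating blog feeds for the 'seek' step.
# Assumes the third-party feedparser library (pip install feedparser);
# the URLs below are hypothetical placeholders, not real subscriptions.
import feedparser

FEEDS = [
    "https://blog.example.com/feed",
    "https://another-blog.example.org/rss",
]

for url in FEEDS:
    parsed = feedparser.parse(url)
    print(parsed.feed.get("title", url))
    for entry in parsed.entries[:3]:  # skim the three most recent posts
        print("  -", entry.get("title", "(untitled)"))
```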

Sense

Sometimes, I'm looking for a specific answer, and it gets put into my work. Other times, it's about processing something I've come across. It may lead me to diagramming, or writing something up, frequently both (as here). Diagramming is about trying to come to grips with conceptual relationships by mapping them to spatial ones. Writing is about creating a narrative around it.

Another thing I do is apply knowledge, that is, put it into action. This can be in a design, or in writing something up. This is different from just writing, for me. That is, I'm not just explaining it, I'm using it in a solution.

Share

To share, I do things like blog, give presentations and workshops, and write books. I also write articles, and sometimes just retweet. Harold mentioned, during the session, that sharing should be more than just passing something on; it should also add value. However, I do sometimes just like or share things, thinking that spreading them to a different audience is itself value. If you're not too prolific in your output, I reckon the selected shares add value. Of course, in general if I pass things on I do try to add a note, such as when sharing someone else's blog post that I thought particularly valuable.

So that's my process. It's evolving, of course. We talked about how our approaches have changed; we've both dropped the quantity of posts, for instance. We're also continually updating our tools. I've previously noted how comments that used to appear on my blog now appear on LinkedIn.

To be fair, it's also worth noting that this approach scales. Workgroups and communities can take a similar approach to continual processing. Harold's done it in orgs, and it factors nicely into social learning as well. One attendee immediately thought about how it could be used in training sessions!

So that’s a rough cut at my PKM process. I invite you to reflect on yours, and share it with Harold as well!

I discuss PKM in both my Revolutionize L&D book, and my Learning Science book.

Experts and Explanations

8 March 2022 by Clark

I've been going through several different forms of expert documentation. As a consequence, I've been experiencing a lot of the problems with that! Experts have trouble articulating their thinking. This requires some extra work on the part of those who work with them, whether instructional designers, technical writers, editors, whoever. There are some reliably recurring problems with experts and explanations that are worth reviewing.

The start of the problem is that the way we acquire expertise is, basically, to take our conscious thinking and automatize it. We want to make our thinking so automatic that we no longer have to consciously think about it. So, we essentially compile it away. Which creates a problem. For one, what we process into memory may not bear a close resemblance to what we heard and applied. That is, the semantic language we use to guide our practice as we internalize it may not be what we store once it's automated.

It's also the case that we lose access to that compiled-away expertise. There's evidence of this, for one from research by the Cognitive Technology group at the University of Southern California showing that experts can't access about 70% of what they do! Another piece of evidence is the widespread failure of so-called 'expert systems' in the 80s, resulting in the AI winter. Whether the locus of the problem is in what actually gets stored, or in access to it, the result is that what we were told to do, and say we do, may not be close to what we actually do.

Another problem is that experts also lose touch with what they grappled with as novices. What they take for granted isn't even on the radar of novices. So it's difficult to get them to provide good support for acquiring skills or understanding. Their attempts at explanations, whether for reference or instruction, fall short.

All told, this leads to systematic gaps in content. I've been seeing this manifest in explanations that may say what to do, but not why or how. There may be a lack of examples, and the thinking behind the examples I do see isn't there. There's also a lack of visual support. They're not including diagrams when it's conceptual relationships that need understanding. They're also not including images when context is needed. They shouldn't necessarily be blamed, because they don't need the support and can't even imagine that others do!

It’s clear that experts should not be the ones doing the explanations. They’re experts, and they have valuable input, but there needs to be a process to avoid these problems. We need tech writers, IDs, and others to work with experts to get this right. Too often we see experts being tasked with doing the explanations, and we live with the consequences.

What to do? One step is to let experts know that their expertise is in their domain, but the expertise in extracting that expertise and presenting it lies in others. To do so convincingly, you’ll need the science about why. For another, know techniques to unearth that underlying thinking. Also allow time in your schedule for this to happen. Don’t think the SME can just give you information; you’ll have to process what you get to rearrange it into something useful. You may also need some sticks and carrots.

As I wrestle with the outputs of experts, here's my plea. There are wonderful ways experts and explanations can work out, but don't take it for granted. Don't give experts the job of communicating to anyone but other experts, or to those who are expert at working with experts to get explanations. Fair enough?

Examples before practice

1 March 2022 by Clark

I've been wrong before, and I'll be wrong again, and that's ok <warning: link is NSFW>. It's like with science: if you change your mind, you weren't lying before, you've learned more now. So I've been wrong about the emphasis between practice and examples. What I've learned is that, in general, practice isn't the only area of importance, and that there are benefits to examples before practice.

So, as part of the Learning Development Accelerator's YOK (You Oughta Know) series, I got the chance to interview John Sweller. I've known John, I'm very honored to say, from my days at UNSW. I was aware of his reputation as a cog sci luminary, but he also turned out to be a really nice person. He's the originator of Cognitive Load Theory (CLT), and he was kind enough to agree to talk about it.

As background, he's tapped into David Geary's biologically primary and biologically secondary learning. The core idea is that some things we've evolved to learn, like speaking. Then there are things we've developed intellectually, like reading and writing, that aren't natural. Instruction exists to assist us in acquiring the latter, which typically has high 'element interactivity', whereby there are complex interrelationships to master. That is, it's complex.

CLT posits that we have limited cognitive capacity, and overwhelming that capacity interferes with learning. The model talks about two types of load. The first is intrinsic load, that implied by the learning task. The second is extrinsic load, coming from additional factors in the particular situation. The premise is that learning complex things (biologically secondary) has such a high intrinsic load that we really need to focus on managing load so we can gradually acquire the entailed relationships.
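(To make that premise concrete, here's a toy sketch; it's not from Sweller's formal treatment, and the capacity constant and load numbers are invented purely for illustration.)

```python
# Toy illustration of the CLT premise: learning suffers when intrinsic plus
# extrinsic load exceeds working-memory capacity. The capacity constant and
# the load values are invented; real cognitive capacity isn't a single number.

WORKING_MEMORY_CAPACITY = 4.0  # hypothetical units

def learning_is_viable(intrinsic_load: float, extrinsic_load: float) -> bool:
    """Return True if the combined load stays within the assumed capacity."""
    return (intrinsic_load + extrinsic_load) <= WORKING_MEMORY_CAPACITY

# Complex (biologically secondary) material carries high intrinsic load,
# so extrinsic load has to be pared back for learning to proceed.
print(learning_is_viable(intrinsic_load=3.5, extrinsic_load=1.0))  # False: overloaded
print(learning_is_viable(intrinsic_load=3.5, extrinsic_load=0.4))  # True: manageable
```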

There are a number of implications of CLT, but one is about the value of worked examples. An important element is showing the thinking behind the steps. A second empirical result is that worked examples are better than practice! At least initially, for novices. Yet this upends one of my recommendations, which is generally that the most important thing we can do to improve our learning is focus on better practice. I still believe that, but now with the caveat: after worked examples.

Now, he didn't tell us when that happens, e.g. when you switch from worked examples to practice. However, like the answer to how much spacing is needed for the spaced practice effect, I suspect the answer is 'it depends'. There's the 'expertise reversal' effect, which says that as you gain experience, the value of worked examples falls and the value of practice rises. That point, I'd suggest, is dependent on the prior knowledge of the learners, the complexity of the material, the scope, and more.

I'm now recommending, particularly for new material, that improving the learning outcomes includes meaningful practice after quality worked examples. That's my new, better, understanding. Make sense?

As an aside, I talked about CLT in my most recent book, on learning science, with a diagram. In it, I only included intrinsic and extrinsic, as those two seemed critical, yet the classic theory also includes germane load. One of the audience members asked him about that, and John opined that he probably needn't have included germane. Vindication!

Good and bad advice all in one!

22 February 2022 by Clark

I was asked to read an article and weigh in. First, please don't do this if you don't know me. However, that's not the topic here; instead, I want to comment on the article. Realize that if you ask me to read an article, you're opening yourself up to my opinion, good or bad. This one's interesting, because it's both. Then the question is how you deal with good and bad advice all in one.

This article is about microlearning. If you've been paying attention (and there's no reason you should be), I've gone off on the term before. I think it's used loosely, and that's a problem because there are separate meanings, which require separate designs, and not distinguishing them means it's not clear you know what you're talking about. (If someone uses the term, I'm liable to ask which they mean! You might do the same.)

This article starts out saying that 3-5 minute videos are not microlearning. I have to agree with that. However, the author then goes on to document 15 points that are important about microlearning. I'll give credit for the admission that there's no claim that this is a necessary and complete set. Then, unfortunately, I also have to remove credit for providing no data to support the claims! Thus, we have to evaluate each on its own merits. Sorry, but I kinda prefer some sort of evidence, rather than a 'self-evident' fallback.

For instance, there's a claim for brevity. I've liked the admonition (e.g. by JD Dillon) that microlearning should be no longer, and no shorter, than necessary. However, there's also a claim here that it should be "3 – 10 minutes of attention span". Why? What determines this? Human attention is complex, and we can disappear into novels, or films, or games, for hours. Yes, "Time for learning is a critical derailer", but… it's a function of how important and complex the topic is, and how costly mistakes would be. There's no one magic guideline.

The advice continues in this frame: there are calls for simplicity, minimalism, etc. Most of these are good principles, when appropriately constrained. However, arbitrary claims like "one concept at a time is the golden rule" aren't necessarily right, and aren't based on anything other than "our brains need time for processing". Yes, that's what automation is about, but to build chunks for short-term memory, we have to activate things in juxtaposition. Is that one concept? It's too vague.

However, it could be tolerated if some of the advice didn't fall prey to fallacious reasoning. So, for instance, the call for gamification leans into "Millennials and Gen Z workforce" claims. This is a myth. Gamification itself is already dubious, and using a bad basis as an assumed foundation exacerbates the problem. There are other problems as well. For one, automatically assuming social is useful is a mistake. Tying in competition because of a presumed need to compete is a facile suggestion. Using terms like 'horde' and 'herd' actually feels demeaning to the value of community. A bald statement like "Numbers speak louder than words!" similarly seems to suggest that marketing trumps matter. I don't agree.

Overall, this article is a mixed bag. So then the question arises, how do you rate it? What do you do? Obviously, I had to take it apart. The desire for a comment isn’t sufficient to address a complex suite of decent principles mixed up with bad advice and justified (if at all) on false premises. I have to say that this isn’t worth your time. There’s better advice to be had, including on microlearning. In general, I’ll suggest that if there’s good and bad advice all in one, it’s overall bad. Caveat emptor!

Generic Thinking Skills?

15 February 2022 by Clark

Recently, a colleague asked a few of us about our views on critical thinking skills. This is actually a contentious topic. There are increasingly broad claims of the need for them, even showing up in job advertisements. On the other hand, researchers and others have weighed in against them, saying that expertise is the only lever. I tend to lump critical thinking skills in with the broader issue of generic thinking skills, so what are the issues?

Upfront, I'll admit that I like the concept of generic thinking skills. Say, for instance, learning-to-learn skills. That is, domain-independent skills that lead to better approaches. It seems to make sense that, in the absence of specific knowledge, some general approaches are more useful than others. For instance, faced with a new domain, I'd be inclined to expect that systematic experimentation and observation would be better than random trial and error.

On the other hand, prominent psychologists like John Sweller and Paul Kirschner have said that domain-specific skills are the only way to bet. There is significant evidence that expertise matters in successful approaches to problem-solving, and other tasks. While we have some innate skills for domains that are biologically primary, learning in other domains requires expertise.

Is there, then, any evidence for generic skills? Based on Micki Chi's work on the value of self-explanation, Kate Bielaczyc and others have found that instruction on systematically explaining steps in examples helps, across domains. In my own Ph.D. thesis, I trained folks on analogical reasoning skills, and found improvement across different problem types (for component skills that weren't a) already at ceiling or b) perceived to be immutable).

How, then, do we reconcile these conflicting viewpoints? My (self- :) explanation is that it’s a matter of degree, a continuum rather than a dichotomy. The more domain knowledge you possess, the more likely you are to find a good answer. However, what if you’re in a new domain where you don’t have relevant expertise to hand? In that case, I’ll suggest that there are benefits to some approaches over others, and training those general skills is justifiable. That is, general skills are weaker than domain specific skills, but general skills are better than nothing!

We know that there are practices that improve outcomes. For instance, I've written about how to, and not to, do brainstorming. Similarly, I believe Harold Jarche's Seek-Sense-Share model works across domains. Systematic creativity is not an oxymoron! That's the story I'm holding on to about generic thinking skills. What are your thoughts on the topic?

Reflecting (on 2021)

28 December 2021 by Clark

I don’t think I’ve made a habit of it, but it occurs to me that it might be good to reflect a little on this past year. In particular, I want to revisit the areas I’ve been focusing on. There’ve been some emergent themes, and it’s worth it (to me, at least ;) to think a bit more about them. So here’s what I’m thinking about while reflecting on 2021.

Obviously, the cognitive and learning sciences have been a theme. The publication of my book on learning science this year was a catalyst, as you might expect. In it, I covered not only the basics, but some of the extended areas. These extended areas include thinking about situated learning and the importance of context, distributed cognition and the use of external representations, and an area new for me, embodied cognition including gesture and motion. Annie Murphy Paul's The Extended Mind covers these nicely.

Another topic is engagement (including four posts on the topic, starting here), which I view as the complement to the learning science side. I think of learning experience design as the elegant integration of learning science and engagement, and am continually working to create a definitive approach to the latter as I've done with the former. (Stay tuned.)

Coping with change is another recurrent theme. As we are facing increasing chaos, the ability of organizations to adapt requires innovation. Which, really, is a form of learning. I argue further that it's an area L&D should be engaged in. Agility will be a critical differentiator for organizations, and it's an opportunity to be more central to organizational success.

I’ve also been on about how the transformation organizations need shouldn’t start with digital. I think this is an increasingly important realization in this era of change. To be successful, organizations need to work in coherence with how their people think, work, and learn. If you get that right, digitization can facilitate outcomes. However, if you digitize some of the old approaches that are holdovers from prior eras, you can limit the effectiveness of the investments.

Reviewing my past year's posts, there's a mix of other topics. I've continued my usual 'takedowns' of myths, shared thoughts on education, and unpacked nuances of learning design. A mixed bag, but then this blog is about my various ideas. So those are my current reflections.

Take note, there will be some changes to announce come the new year. Until then, please have a safe and happy holiday season, and best wishes for the new year.

 

Time for Reflection

21 December 2021 by Clark

My dad used to regale me with this tale of his best friend, who told his new employer when he started: "If you see me with my feet up on my desk and it appears I'm sleeping, I'm not. I'm working. I'll still do the work of 2 other engineers." And he did! I love this story, because it brings out an element that we seem to be losing, the value of taking time for reflection.

Now, he may actually have been sleeping, yet that doesn’t concern me; sleeping is a mechanism for processing, too. What concerns me are folks who can’t be seen to be taking time off from ‘the work’. We’re in a mode where we push people to work harder and faster. We say “work smarter” but don’t tell people what that means!

I've spent time in a job early in my career reading (relevant) magazines like Byte, with my feet up on the desk. Yet, I immodestly suggest I cranked out work at least as fast as my colleagues. I found reading, and now searching for answers, to be a valuable use of my time. Why? Because I'm learning. I reflect on what I do and how to do it better, learning to do new things that I need to meet my current challenges.

Sure, I do the work. However, I also take walks, put my feet up and ponder, and more. I blog, for instance ;). There are other ways I write as well, and experiment, and look to refine my thinking. Also, I look things up, read books, and generally track my field and answers to specific questions. My work improves as a consequence. Moreover, we all benefit from taking time to reflect. It's documented in the work conducted by Garvin, Edmondson, & Gino as one of the elements of a learning organization.

So, I’ll keep promoting, and practicing, taking time for reflection. I hope you can, too. Moreover, I hope you can help get such time recognized as valuable in your organization. We focus too much on the fast, and as they say: “fast, cheap, or good, pick 2”. I’m not sure fast is always the best solution. Certainly for learning. After all, it is about learning…

 

The case for good timekeeping

7 December 2021 by Clark

It occurs to me that maybe not everyone has the same view of timekeeping that I do. So I thought I'd make the case for good timekeeping. To me, it's about coping with change.

To me, it starts with respect. This includes respect for the audience, the speaker, the topic, and the context. There are times when timekeeping should be lax, I believe. My first take is that the defining circumstances are when there are no people involved who aren't already part of it, there's not a fixed time agenda, speaker times aren't set, and the outcome is more important than punctuality. In other words, rare for a public event.

When an event is public, there tend to be some other constraints. Most importantly, there's liable to be a schedule. People need to know when to arrive. If there's more than one event on the schedule, it extends beyond an hour, and the audience is diverse (e.g. not just one company), time becomes increasingly important.

Why should we care? Back to the starting point, people might be coming in to see someone in particular. If the schedule isn't adhered to, folks who were counting on a particular time could be disappointed. If there's a start and end time for each block, speakers further down the agenda could be impacted if someone runs long. Neither is fair.

I have experienced folks who seem to be unaware of time. In my personal experience as a speaker, I was on a joint presentation where the leadoff presenter seemed to forget that there was anyone else on the agenda! He was gracious once it was pointed out, but it was uncomfortable for me to have to break in and remind him. I’ve also seen people unaware that they were running long, and others unable to amend their presentation on the fly when they  did become aware. I’ve literally seen someone have to stop where they are instead of finding a way to wrap up!

Having had early experience being a moderator, after being the victim of such sessions, I made a commitment (in line with the event organizer's intent) to be rigorous. I've now no qualms about, after giving fair warning, stopping someone who hasn't maintained control. Contrary to what some might think, it's rude not to! It's the speaker's fault, no one else's. They aren't being professional and respectful of the audience and the other speakers. That also includes getting out of the way in a timely manner if it's a scheduled room, leaving time for the next person to set up.

So my guidelines for timekeeping:

  1. Designate someone as the timekeeper; there can't be a question over whose job it is.
  2. Warn speakers ahead of time about the rigor and practices. No excuses!
  3. Have a practice for signaling if things are getting close, e.g. signs for # of mins left, colored lights, messages in chat (tho' some people seem unable to process them; beware), what have you. (A toy sketch of such a signal follows this list.)
  4. As a presenter, if you don’t have a good basis for assessing your likely length (e.g. I’m about a slide a minute, though with some quick builds it can be faster), practice and check your timing! Realize that live it’s likely to go a bit longer than your practice. Trim if necessary.
  5. Also, have enough awareness of your material that you can adapt on the fly. Sometimes other things happen (once the power went out in a presentation), and you have to adjust.
  6. Be firm; interrupt and stop speakers when it’s time.
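
As flagged in point 3, here's a toy sketch of a 'minutes left' signal; the thresholds, the console printing, and the function name are my own illustrative choices, not anything prescribed above.

```python
# Toy countdown for a timed talk slot: announce when each warning threshold
# is reached, then call time. Thresholds and print-to-console are illustrative
# only; a real event might use signs, colored lights, or chat messages.
import time

def run_talk_timer(total_minutes: int, warnings=(10, 5, 1)) -> None:
    """Count down a talk slot, flagging each remaining-minutes warning."""
    for remaining in range(total_minutes, 0, -1):
        if remaining in warnings:
            print(f"Signal the speaker: {remaining} minute(s) left.")
        time.sleep(60)  # wait one minute
    print("Time's up: interrupt and stop the speaker.")

# Example: a 20-minute slot with warnings at 10, 5, and 1 minutes.
# run_talk_timer(20)
```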

That’s off the top of my head; I’m sure there are more comprehensive and thorough lists. My point is to be aware, and prepared. As a speaker, I appreciate it. As an audience member as well. That’s my case for good timekeeping.  What do you think?

Beyond Industrial Age Thinking

23 November 2021 by Clark

I’ve long maintained that our organizational practices are too often misaligned with how our brains really work. I’ve attributed that to a legacy from previous eras. Yet, I realize that there may be another legacy, a cognitive one. Here I’ll suggest we need to move beyond Industrial Age thinking.

The premise comes from business. We transitioned from a largely agricultural economy to a manufacturing economy, of goods and services. Factories got economic advantage from scale. We also essentially treated people as parts of the machine. Taylorism, aka scientific management, looked at how much a person could produce if they were working as efficiently as possible, without damage. Back then, few were educated, and we didn't have sufficiently sophisticated mechanisms. Times change, and we're now in an information age. Yet, a number of our approaches are still based upon industrial approaches. We're living on a legacy.

Now I'm taking this to our models of mind. The cognitive approach is certainly more recent than the Industrial Age, but it carries its own legacies. We regularly take technology as metaphors for mind. Before the digital computer, for instance, telephone switching was briefly used as a model. The advent of the digital computer, a general purpose information system, is a natural next step. We're information processing machines, so aren't we like computers?

It turns out, we're not. There's considerable evidence that we are not formal, logical, reasoning machines. In fact, we do well what it's hard to get computers to do, and vice-versa. We struggle to remember large quantities of data, or abstract and arbitrary information, and to remember it verbatim. Yet we are also good at pattern-matching and meaning-making (sometimes too good; *cough* conspiracy theories *cough*). Computers are the opposite. They can remember large quantities of information accurately, but struggle to do meaning-making.

My concern is that we’re still carrying a legacy of formal reasoning. That is, the notion that we can do it all in our heads, alone, continues though it’s been proven inaccurate. We make inferences and take actions based upon this assumption, perhaps not even consciously!

How else to explain, for instance, the continuing prevalence of information presentation under the guise of training? I suggest there's a lingering belief that if we present information to people, they'll logically change their behavior to accommodate. Information dump and knowledge test are a natural consequence of this perspective. Yet, this doesn't lead to learning!

When we look at how we really perform, we recognize that we're contextually-influenced, and tied to previous experience. If we want to do things differently, we have to practice doing it differently. We can provide information (specifically mental models, examples, and feedback) to facilitate both initial acquisition and continual improvement, but we can't just present information.

If we want to truly apply learning science to the design of instruction, we have to understand our brains. In reality, not outdated metaphors. That’s the opportunity, and truly the necessity. We need to move beyond Industrial Age thinking, and incorporate post-cognitive perspectives. To the extent we do, we stand to benefit.

Higher-education Myths

12 October 2021 by Clark

I've been involved in higher education in a variety of roles: as a victim, er, student; as a grad student; as a post-doc; as a tenured/promoted faculty member; as a textbook publishing consultant; as a strategic elearning consultant… Further, in general, I'm a supporter. I do have quibbles, and one is the persistence of learning myths. Trust me, I wrote a whole book on what the research says about them! In addition to having talked about org learning myths, let me explore some elements of higher-education myths.

I saw an article in the top education news source in the country, The Chronicle of Higher Education. I get their daily newsletter, just to keep my finger on the pulse. However, this article was touting issues for Gen Z students. Yet, research says that the 'generations' framework isn't valid. There's no reliable data that generations are a viable distinction. In fact, it literally is discrimination (in terms of using arbitrary distinctions to label people).

This is only part of the broader problem. A colleague regularly chides his alma mater for continuing to believe in learning styles. This, too, is a myth! While learners do differ, there's no evidence we should adapt learning to learning styles, let alone that we can reliably identify them. It's appealing, but wrong. Not that it isn't prevalent amongst K12 teachers as well.

Which is related to another problem: business school curricula. I was surprised, and dismayed, to find that a prominent business school has personality instruments as part of its curriculum! This includes MBTI, which is discredited both theoretically and empirically. Other such instruments, also with flaws, continue to be included. I'm sure there may be some financial motivation as well. (E.g., like Apple & Microsoft did in offering huge discounts to schools, to get new users used to their products.)

We should not tolerate learning myths in university. Aren’t these bastions of science? Ok, that’s another myth, that universities aren’t riven with politics, but that’s not the focus here. Still, universities should be better at rejecting myths, just as they should also be better about using the best pedagogies. Which they also aren’t doing, by and large ;).

There are more myths about universities, and issues like what their role in society should be. That's not what I'm talking about here, though. For all their other issues, they should not be perpetuators of higher-education myths. (Here's hoping they're also not guilty of the 'attention span of a goldfish' myth!)
