Learnlets

Clark Quinn’s Learnings about Learning

For ‘normals’

23 January 2024 by Clark 5 Comments

So, I generally advocate for evidence-based practices. And, I realized, I do this with some prejudice. Which isn’t my intent! So, I was reflecting on what affects such decisions, and I realized that perhaps I need a qualification. When I state my prescriptions then, I might have to add “for ‘normals'”.

First, I have to be careful. What do I mean by ‘normal’? I personally believe we’re all on continua on many factors. We may not cross the line to actively qualify as obsessive-compulsive, or attention-deficit, or sensorily-limited. Yet we’re all somewhere on these dimensions. Some of us cross one or more of those lines (if we’re ever even measured; they didn’t have some of these tests when I was growing up). So, for me, ‘normal’ means folks who don’t cross those lines, or who cope well enough. Another way to say it is ‘neurotypical’ (thanks, Declan).

What prompted this, amongst other things, is a colleague who insisted that learning styles did matter. In her case, she couldn’t learn unless it was audio, at least at first. Now, the science doesn’t support learning styles. However, if you’re visually-challenged (e.g. legally blind), you really can’t be a visual learner. I had another colleague who insisted she didn’t dream in images, but instead in audio. I do think there are biases to particular media that can be less or more extreme. Of course, I do think you probably can’t learn to ride a bicycle without some kinesthetic elements, just as learning music pretty much requires audio.

Now, Todd Rose, in his book The End of Average, makes the case that no one is average. That is, we all vary. He tells a lovely story about how an airplane cockpit carefully designed to be the exact average actually fit no one! So, making statements about the average may be problematic. While we’ve had a ‘one-size fits all’ approach in classrooms, online we now have the ability to work beyond that. We can adapt based upon the learner.

Still, we need to have a baseline. The more we know about the audience, the better a job we can do. (What they did with cockpits was make them adjustable. Even then, some people still won’t fit, at least not without extra accommodation.) That said, we will need to design for the ‘normal’ audience. We should, of course, also do what we can to make the content accessible to all (that covers a wide swath by the way). And, while I assume it’s understood, let me be explicit here that I am talking “for ‘normals’”. We should ensure, however, that we’re accommodating everyone possible.

Facilitating in the dark

16 January 2024 by Clark 1 Comment

I recently spoke to the International Association of Facilitators – India, having chosen to focus on transfer. My intent was for them to be thinking about ensuring that the skills they facilitate get applied when useful. My preparation was, apparently, insufficient, leaving me to discover something mid-talk. Which leads me to reflect on facilitating in the dark.

So, I’m not a trained facilitator (nor designer, nor trainer, nor coach). While I’ve done most of this (with generally good results), I’m guided by the learning science behind whatever I’m doing. So, in this case, I thought they were facilitating learning by either serving as trainers or coaches. Imagine my surprise when I found out that they largely facilitate without knowing the topic!

In general, to create learning experiences, we need good performance objectives. From there, we design the practice, and then align everything else to succeed on the final practice. We also (should) design the extension of the learning into coaching past any formal instruction, and generally ensure that the impact isn’t undermined.

How, then, do you get models and examples, and provide feedback on practice, if you don’t know the domain? What they said was that they were taking it from the learners themselves. They would get the learners together and facilitate them into helping each other, largely. This included creating an appropriate space.

To me, then, there are some additional things that need to be done. (And I’m not arguing they don’t do this.) You need to get the learners to:

  • articulate the models
  • provide examples
  • ensure that they articulate the underlying thinking
  • think about how to unpack the nuances
  • ensure sufficient coverage of contexts
  • provide feedback on others’ experiences

This is in addition to creating a safe space, opening and closing the experience, etc.

So it caused me to think about when this can happen. I really can’t see this happening for novices. They don’t know the frameworks and don’t have the experience. They need formal instruction. Once learners have had some introduction and practice, however, this sort of facilitation could work. It may be a substitute for a community of practice that might naturally provide this context. You’d just be creating the safe space in the facilitation instead of the community.

The necessary skills to do this well, to be agile enough mentally to balance all these tasks, even with a process, are impressive. I did ask whether they ended up working in particular verticals, because it does seem like even if you came in facilitating in the dark, you couldn’t help but learn while doing the facilitation. There did seem to be some agreement.

Overall, while I prefer people with domain knowledge doing facilitation, I can see this. At least, if the community can’t do it itself. We don’t share enough about learning to learn, and we could. I do think a role for L&D is to spread the abilities to learn, so that more folks can do it more effectively. The late Jay Cross believed this might be the best investment a company could make!

Nonetheless, while facilitating in the dark may not be optimal, it may be useful. And that, of course, is really the litmus test. So it was another learning opportunity for me, and hopefully for them too!

 

Myths are models

9 January 2024 by Clark 4 Comments

A recent LinkedIn post talked about how models are good, but myths are bad. Which was a realization for me. I’ve kept myths and models largely separate in my mind, but I realize that’s not the case. Myths are models, just wrong ones. And, I suppose, we need to deal with them as such. (Also, folks hang on to myths and models if they’re tied up with identity, but we should still be able to deal with the logical rationale.)

So, I’m an advocate for mental models. There are a variety of reasons, personal, pragmatic, and principled. Personally, I was gifted a book on mental models by my workmates as I left for graduate school. Pragmatically, they’re useful. On principle, they’re how we reason about the world. Heck, our brains are constantly building them!

The important aspects of models are that they’re predictive (and explanatory). That is, they tell us the outcomes of actions in particular situations. They are models of a small bit of the world, and are used to understand a perturbation of the model. They’re causal, in that they talk about how the world works, and conceptual in that they talk about the elements of the world. They’re incomplete, in that they only need to account for the parts of the world relevant to the particular situation.

Examples include using an analogy of water flowing in pipes for thinking about electric circuits. Or how advertisements use association with valued things or people to induce a positive affect. You can use them to explain what happened, or what will happen. It’s the latter that’s important for the purposes of providing a basis for guiding decisions, and thus their role in learning.  They guide us in deciding how to take actions under different circumstances.

Models can be good or bad. The old ‘planets circling a sun’ model of electrons in orbit around a nucleus of protons and neutrons turned out to be inaccurate as our understanding increased. We then moved to probability clouds as a better model. Many of our mistakes come from using the wrong model, for a variety of reasons. We can mistake the situation, or mistakenly believe a model is accurate and useful when it isn’t.

We should avoid models that aren’t appropriate for the situation. Myths are models that aren’t appropriate for any situation. So, for instance, learning styles, generations, ‘attention span of a goldfish’, and ‘images are processed 60K faster than prose’ are examples of myths. They lead people to make decisions that are erroneous, such as providing different learning prescriptions. They are models, because they do categorize the world and lead to prescriptions about what to do. They’re myths, because their implications will lead to decisions that waste time and money.

As the saying goes, “all models are wrong, but some are useful”. They’re wrong because they’re only part of the world. The good ones give us useful predictions. The bad ones lead us to make bad decisions. The useful ones are to be lauded, shared, and used. Myths, however, should be debunked and avoided. Myths are models, but not all models are good. It’s important that I remember that!

Quality or Quantity?

2 January 2024 by Clark 4 Comments

Recently, there’s been a lot of excitement about Generative Artificial Intelligence (Generative AI). Which is somewhat justified, in that this technology brings in two major new capabilities. Generative AI is built upon a large knowledge base, and then the ability to generate plausible versions of output. Output can be in whatever medium: text, visuals, or audio. However, there are two directions we can go. We can use this tool to produce more of the same more efficiently, or do what we’re doing more effectively. The question is what do we want as outcomes: quality or quantity?

There are a lot of pressures to be more efficient. When our competitors are producing X at cost Y, there’s pressure to do it for less cost, or produce more X’s per unit time. Doing more with less drives productivity increases, which shareholders generally think are good. There are always pushes for doing things with less cost or time. Which makes sense, under one constraint: that what we’re doing is good enough.

If we’re doing bad things faster, or cheaper, is that good? Should we be increasing our ability to produce planet-threatening outputs? Should we be decreasing the costs on things that are actually bad for us? In general, we tend to write policies to support things that we believe in, and reduce the likelihood of undesirable things occurring (see: tax policy). Thus, it would seem that if things are good, go for efficiency. If things aren’t good, go for quality, right?

So, what’s the state of L&D? I don’t know about you, but after literally decades talking about good design, I still see way too many bad practices: knowledge dump masquerading as learning, tarted up drill-and-kill instead of skill practice, high production values instead of meaningful design, etc. I argue that window-dressing on bad design is still bad design. You can use the latest shiny technology, compelling graphics, stunning video, and all, but still be wasting money because there’s no learning design underneath it.  To put it another way, get the learning design right first, then worry about how technology can advance what you’re doing.

Which isn’t what I’m seeing with Generative AI (as only the latest in the ‘shiny object’ syndrome; we’ve seen it before with AR/VR, mobile, virtual worlds, etc.). I am hearing people saying “how can I use this to work faster”, “how can I put out more content per unit time”, etc., instead of “how can we use this to make our learning more impactful”. Right now, we’re not designing to ensure meaningful changes, nor measuring enough of whether our interventions are having an impact. I’ll suggest our practices aren’t yet worth accelerating; they still need improving! More bad learning faster isn’t my idea of where we should be.

The flaws in the technology provide plenty of fodder for worrying. They don’t know the truth, and will confidently spout nonsense. Generative AIs don’t ‘understand’ anything, let alone learning design. They are also only knowledge engines, and can’t create impactful practice that truly embeds the core decisions in compelling and relevant settings. They can aid this, but only with knowledgeable use. There are ways to use such technology, but it starts with the goal of actually achieving an outcome beyond having met schedule and budget.

I think we need to push much harder for effectiveness in our industry before we push for efficiency.  We can do both, but it takes a deeper understanding of what matters. My answer to the question of quality or quantity is that we have to do quality first, before we address quantity. When we do, we can improve our organizations and their bottom lines. Otherwise, we can be having a negative impact on both. Where do you sit?

One may not be enough

5 December 2023 by Clark Leave a Comment

A recent intersection of talks leads to an interesting issue for L&D. First, we recently talked to Guy Wallace about his recent book, The L&D Pivot Point. Then, we talked to Julie Dirksen about her new book, Talk to the Elephant. The interesting thing is that there’s some overlap between the two ideas that isn’t immediately obvious, but really important. The realization is that when we’re talking about barriers to success, thinking of one may not be enough.

So, Guy’s book is about taking a step above just thinking of courses. He’s a proponent of performance improvement consulting, where you analyze the problem before you decree a course as a solution. The important recognition is that there can be multiple barriers to performance, including a lack of skills, which does indicate a course. However, other reasons might be the wrong incentives, a lack of resources, etc. Sometimes a job aid can do better; sometimes neither that nor a course will suffice.

Julie’s book, on the other hand, is a complement to her first book, Design for How People Learn. She recognized that even good design (what her first book did, eloquently) might not help learning stick, and looked at other barriers, such as managers extinguishing the learning. She was more focused on making the learning design succeed.

What she did, however, is provide a rich suite of potential barriers, along with solutions, and suggest that you may need to address more than one. That goes along with, and complements, Guy’s focus.

Just as you design programs that include messaging, training, support, rewards, and more, you should also ensure that you’ve analyzed all the barriers to performance. You might address learning, provide job aids, ensure incentives are aligned, prepare supervisors, and more. Addressing only a particular situation may not be sufficient. You may have several barriers. When it comes to solutions, one may not be enough. This argues (again) for rigorous analysis and a success focus, not just doing what you are comfortable with. In the long term, I reckon this is where we need to go as we move from learning to performance (and development). Your thoughts?

Where are we at?

28 November 2023 by Clark 1 Comment

I was talking with a colleague, and he was opining about where he sees our industry. On the other hand, I had some different, and some similar, thoughts. I know there are regular reports on L&D trends, with greater or lesser accuracy. However, he was, and I similarly am, looking slightly larger than just “ok, we’re now enthused about generative AI”. Yes, and, what’s that a signal of? What’s the context? Where are we at?

When I’m optimistic, I think I see signs of an awakening awareness. There are more books on learning science, for instance. (That may be more publishers and people looking for exposure, but I remain hopeful.)  I see a higher level of interest in ‘evidence-based’. This is all to the good (if true). That is, we could and should be beginning to look at how and why to use technology to facilitate learning appropriately.

On the cynical side, of course, is other evidence. For example, the interest in generative AI seems to be about ways to reduce costs. That’s not really what we should be looking at. We should be freeing up time to focus on the more important things, instead of just being able to produce more ‘content’ with even less investment. The ‘cargo cult’ enthusiasm about VR, AR, AI, etc. still seems to be about chasing the latest shiny object.

As an aside, I’ll still argue that investing in understanding learning and better design will have a better payoff than any tech without that foundation. No matter what the vendors will tell you!  You can have an impact, though of course you risk having a previous lack of impact exposed…

So, his point was that he thought that more and more leaders of L&D are realizing they need that foundation. I’d welcome this (see optimism, above ;). Similarly, I argue that if Pine & Gilmore are right (in The Experience Economy) about what’s the next step, we should be the ones to drive the Transformation Economy (experiences that transform you). Still, is this a reliable move in the field? I still see folks who come in from other areas of the biz to lead learning, but don’t understand it. I’ll also cite the phenomenon that when folks come into a new role they need to be seen to be doing something. While getting their minds around learning would be a good step, I fear that too many see it as just management & leadership, not domain knowledge. Which, reliably, doesn’t work. Ahem.

Explaining the present, let alone predicting the future, is challenging. (“Never predict anything, particularly the future!”) Yet, it would help to sort out whether there is (finally) the necessary awakening. In general, I’ll remain optimistic, and continue to push for learning science, evidence, and more. That’s my take. What’s yours? Where are we at?

The Pivotal Point

14 November 2023 by Clark 3 Comments

We (the Learning Development Accelerator) just released Guy Wallace’s latest tome, The L&D Pivot Point. Then, we had an interview with him to explain what it’s about. Despite having a ring-side seat (I served as editor, caveat emptor), it was eye-opening to hear him talk about what it’s about! It really is about the pivotal point in L&D, when you move from just offering courses to looking at performance. It’s such an important point that it’s worth reiterating.

So, the official blurb for the book talks about his tried and tested processes. In the interview, he talks about how he’s synthesized the work of the leaders of the performance improvement movement, people like Joe Harless, Geary Rummler, Thomas Gilbert, Robert Mager, Thiagi, and more. While the models they used differed, Guy’s created a synthesis that makes sense, and more importantly, works. He talked about how he refined his work to balance effectiveness with efficiency. Moreover, his approach avoids any redundant work.

Interestingly, he also recounted how his approach achieved buy-in from the stakeholders to the extent that he had to fight to not keep them all on the team through all the stages! That’s a great outcome, and it comes from demonstrating value. He focuses on where performance needs are critical, and thus there’s natural interest, but too many approaches can stifle that interest. Instead, his intent focus on meaningful outcomes truly engages everyone from the performers to the executives.

Guy also is quite open about the problems facing our industry. Despite the necessity of starting as order takers (essentially, “you can’t say ‘no’”), he estimates that only 20% of the time is the problem a learning or skills problem. Which resonates with other data I’ve seen about the value of training interventions! Instead, there can be many drivers for problems in performance. His approach includes detailed analyses that identify the root cause of the problem, and determine whether it’s worth trying an intervention. He’s quite open about how that can lead to a shift in intervention focus. At other times, it might lead to a hiatus while problems get attention.

One other thing I found interesting in the interview was how he talked about potential barriers to success up front. While it might seem like a deterrent, he pointed out how it paid off later. That is, folks would soon see that, for instance, supervisor support was critical to success. He includes a rigorous analysis of potential barriers as part of the book.

Quite simply, L&D has a problem of going from go to whoa without considering whether a course is the right solution. Guy’s book is a way to avoid doing that, and to systematically evaluate what the pivotal point should be for determining whether we can successfully intervene or not, and how. There’s much more: how to manage the process, deal with stakeholders, and test your assumptions. It’s in his own inimitable style (lessons learned on editing ;), but there’s deep wisdom there. That’s my take, at least; I welcome yours.

Getting Engagement Right

9 November 2023 by Clark Leave a Comment

I’m on record stating that I think learning experience design (LXD) is the elegant integration of learning science and engagement. In addition, I’ve looked at both. Amongst the things that stand out for me is that there’s an increasing suite of resources for learning science. For one, I have my own book on the topic! There are other good ones too. However, on the flip side, for engagement, I didn’t find much. I had intended to write an LXD book, but then ATD asked for the learning science one. Once it was done, however, I quickly realized that I wanted to write the complement. Thus, Make It Meaningful was born. It’s available, but I’m also running a workshop on the topic, starting this coming week! Four weekly 2-hour meetings, at the convenient time of noon ET. It’s all about getting engagement right. So, what’s covered?

For the first week, there’s an overview of the importance of engagement, and how to set the ‘hook’. We’ll briefly review the reasons why to consider the engagement side (and trust me, this is something you want to do). Then we’ll talk about the first step, getting learners to a motivated state to begin the learning. We’ll look at barriers to success as well, and what to do.

In the second week, we’ll talk about ‘landing’ the experience. Once the hook is set, it doesn’t mean you’ve got them through the experience. Instead, there’s much to do to maintain that motivation. In addition, you want to ensure that anxiety doesn’t overwhelm the learning, and you want to build confidence. We’ll talk about principles as well as heuristics.

In the third week, we dig into what this means in practical terms. What is an engaging introduction? What about the models and examples? The critical element is the practice that learners perform. We’ll talk about how aligning the practice with the desired objectives while making a compelling context is necessary, but doable.

In the last week, we’ll talk about making a design process that can reliably deliver on learning experience. We’ll take a generic design process and go through the changes that ensure both an effective learning design and an engaging experience. We’ll work from analysis, through specification, and on to evaluation (we won’t talk much about implementation, because of my quip that getting the design right leaves lots of ways to create the solution, and not doing so renders everything else extraneous).

Sure, you can just buy the book, and that’s ok. I’m all about getting the word out, and getting better learning happening for our learners. However, in the workshop, not only do you get the book, but we’ll work through the ideas systematically, put them into practice, and address the individual questions you may have. Look, getting engagement right is an advanced topic, but it’s increasingly what will differentiate our solutions from the knowledge ones that come from typical approaches, no matter how technologically augmented. This stuff matters! So, I hope to see you there.

DnD n LnD

31 October 2023 by Clark 2 Comments

Last Friday, I joined in on a Dungeons & Dragons (DnD) campaign. This wasn’t just gratuitous fun, however, but was explicitly run to connect to Learning & Development folks (LnD). Organized by the Training, Learning, and Development Community (a competitor to LDA? I have bias. ;), there was both some preliminary guidance, and outcomes. I was privileged to play a role, and while not an official part of the followup (happening this week), I thought I’d share my reflections.

So, first, my DnD history. I played a few times while in college, but… I gave it up when a favorite character of mine was killed by an evil trap (that was really too advanced for our party). I’ve played a lot of RPGs since then, with a lot of similarities to the formal DnD games (tho’ the actual ones are too complex). Recently, with guidance from offspring two, our family is getting back into it (with a prompt from a Shakespeare and DnD skit at the local Renaissance Faire).

Then, I’ve been into games for learning since my first job out of college, programming educational computer games. It also became the catalyst for my ongoing exploration of engagement to accompany my interest in cognition/learning, design, and technology. The intersection of which is where I’ve pretty much stayed (in a variety of roles), since then! (And, led to my first book on how to do same.)

Also, about DnD. It’s a game where you create a character. There are lots of details. For one, your characteristics: strength, dexterity, wisdom, intelligence, and more. Those combine with lots of attributes (such as race & role). Then, there’s lots of elaboration: backstory, equipment, and more. This can change during the game, as your abilities also rise. This adds complexity to support ongoing engagement. (I heard one team has been going for over 40 years!)

Characters created by the players are then set loose in a campaign (a setting, precipitating story, and potential details). A Dungeon Master runs the game, Keegan Long-Wheeler in our case, writing it and managing the details. Outcomes happen probabilistically by rolling dice. Computers can play a role. For one, through apps that handle details like rolling the dice. Then folks create games that reflect pre-written campaigns.
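(If you’ve never played, the core mechanic is simpler than it sounds. Below is a rough sketch, in Python, of a typical d20-style check: roll a die, add a modifier derived from a characteristic, and compare to a target number. The scores and the difficulty number are made-up examples for illustration, not anything from our campaign.)

```python
import random

# Illustrative sketch of a d20-style ability check; the ability score and
# difficulty class below are invented examples, not from our campaign.

def ability_modifier(score: int) -> int:
    """A common convention: every 2 points above 10 adds +1 to the roll."""
    return (score - 10) // 2

def ability_check(score: int, difficulty_class: int) -> bool:
    """Roll a twenty-sided die, add the modifier, compare to the target."""
    roll = random.randint(1, 20)
    return roll + ability_modifier(score) >= difficulty_class

# A character with strength 14 tries to force a stuck door (difficulty 15).
print("Success!" if ability_check(14, 15) else "The door holds.")
```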

One important thing, to me, is that the players organize and make decisions together. We were a group who didn’t necessarily know each other, and we were playing under time constraints. This meant we didn’t have the dialog and choices that might typically emerge in such playing. Yet, we managed a successful engagement in the hour+ we were playing. And had fun!

I was an early advocate of games for learning. To be clear, not the tarted up drill and kill we were mostly doing, but inspired by adventure games. John Carroll had written about this back in the day, I found out. However, I’d already seen adventure games having the potential to be a basis for learning. Adventure games naturally require exploring. In them, you’re putting clues together to choose actions to overcome obstacles. Which, really, is good learning practice! That is, making decisions in context in games is good practice for making decisions in performance situations. Okay, with the caveat that you should design the game so that the decisions are naturally embedded.

The complexity of DnD is a bit much, in my mind, for LnD, but…the design! The underlying principles of designing campaigns bear some relation to designing learning experiences. I believe designing engaging learning may be harder than designing learning or games, but we do have good principles. I do believe learning can, and should, be ‘hard fun‘. Heck, it’s the topic of my most recent tome! (I believe learning should be the elegant integration of learning science with engagement.)

This has been an opportunity to reflect a bit on the underlying structure of games, and what makes them work. That’s always a happy time for me. So, I’m curious what you see about the links between games and learning!

Bad research

17 October 2023 by Clark 1 Comment

How do you know what’s dubious research? There are lots of signals, more than I can cover in one post. However, a recent discovery serves as an example to illustrate some useful signals. I was trying to recall a paper I recently read, which suggested that reading is better than video for comprehending issues. Whether that’s true or not isn’t the issue. The issue is that, in my search, I came across an article that really violated a number of principles. As I am wont to do, let’s briefly talk about bad research.

The title of the article (paraphrasing) was “Research confirms that video is superior to text”. Sure, that could be the case! (Actually the results say, not surprisingly, that one medium’s better for some things, and another’s better at others; BTW, one of our great translators of research to practice, Patti Shank, has a series of articles on video that’s worth paying attention to.) Still, this article claimed to have a definitive statement about at least one study. However, when I looked at it, there were several problems.

First, the study was a survey asking instructors what they thought of video. That’s not the same as an experimental study! A good study would choose some appropriate content, and then have equivalent versions in text and video, and then have a comprehension test. (BTW, these experiments have been done.) Asking opinions, even of experts, isn’t quite as good. And these weren’t experts, they were just a collection of instructors. They might have valid opinions, but their expertise wasn’t a basis for deciding.
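To make the contrast concrete, here’s a rough sketch (with invented numbers, purely for illustration) of the sort of analysis an actual experiment would support: two equivalent groups, one comprehension test, and a statistical comparison.

```python
# Rough sketch of an experimental comparison (as opposed to an opinion survey).
# The comprehension scores below are invented placeholders, purely illustrative.
from scipy import stats

text_scores = [78, 82, 75, 90, 68, 85, 80, 73]    # group that read the text version
video_scores = [71, 79, 74, 83, 66, 77, 70, 72]   # group that watched the video version

t_stat, p_value = stats.ttest_ind(text_scores, video_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Only this kind of controlled comparison (not instructors' opinions) can
# support a claim that one medium yields better comprehension of the content.
```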

Worse, the folks conducting the study had. a. video. platform.  Sorry, that’s not an unbiased observer. They have a vested interest in the outcome. What we want is an impartial evaluation. This simply couldn’t be it. Not least, the author was the CEO of the platform.

It got worse. There was also a citation of the unjustified claim that images are processed 60K times faster than text, yet the source of that claim has never been found! They also cited learning styles! Citing unjustified claims isn’t a good practice in sound research. (For instance, when reviewing articles, I used to recommend rejecting them if they talked learning styles.) Yes, it wasn’t a research article on its own, but…I think misleading folks isn’t justified in any article (unless it’s illustrative and you then correct the situation).

Look, you can find valuable insights in lots of unexpected places, and in lots of unexpected ways. (I talk about how ‘business significance’ can be as useful as statistical significance.) However, an author with a vested interest, using an inappropriate method, to make claims that are supported by debunked data, isn’t it. Please, be careful out there!
