Learnlets


Clark Quinn’s Learnings about Learning

Competencies and Innovation?

30 October 2018 by Clark Leave a Comment

This may seem like an odd pairing, so bear with me.  I believe that we want to find ways to support organizations moving in the direction of innovation and learning cultures. Of course, I’ve been on a pretty continuous campaign for this, but I’m wondering what other levers we have. And, oddly, I think competencies may be one. Let me make the case for competencies and innovation.

So I’ve gotten involved in standards and competency work. Don’t ask me why, as I have no better answer than a) they asked, and b) the big ‘sucker’ tattoo on my forehead.  Of course, as I’ve said before, the folks that do this stuff (besides me, obviously) are really contributing to the benefit of our org. Maybe I felt I had to walk the talk?

In the course of the one that was just launched, we identified a number of competencies across the suite of L&D activities. This included (in addition to the more traditional activities) looking at how to foster innovation. This means understanding culture and the change processes to get there, as well as knowing how to run meetings that get the best outputs. It’s about being prepared for both types of innovation: fast (solve ‘this’ problem) and slow (the steady percolation of ideas).

Thus, the necessary skills are identified as a component of a full suite of L&D capabilities. And the hope, of course, is that people will begin to recognize that there are parts of L&D they’re not addressing, and move to take on this opportunity. I hope that it’s becoming obvious that the ability to facilitate innovation is an organizational imperative, and that there’s a strong argument for L&D to be key. This is on principle, and pragmatically, it’s a no-brainer for L&D to find a way to become central to org success, not peripheral.

However, leaving that to chance would be, well, just silly. What can we do? Two things, I think: one is to help raise awareness, the other is to provide support. A suite of skills aligned to this area is a ‘good thing’ if it is known and used. Working on the ‘know’ has been an ongoing thing (*cough*), but how can we support the ‘use’?

Again, two things, I think. One is examples: cases where people have put programs in place, oriented themselves in this direction, and documented the benefits. The other is scaffolding: support materials that help folks implement these competencies. And I believe that’s coming.

“Systematic creativity is not an oxymoron” (I may need to make a quip post about that). And this is an example. Think of brainstorming: it can be useful, or not; when done right, the outcomes are much better. And similarly, in lots of ways, the nuances matter. If we define, through competencies, what suites of knowledge matter, we bring awareness to the possible outcomes, and the opportunity to improve them.

It may be an indirect path, to be sure, but it’s a steady and real one. In fact, to be able to say “we want to innovate, but how?” and have a suite of specific sets of knowledge on tap to point people to is pretty close to the fastest path. Showing people the benefits, and the path to obtain them, is key. It’s even self-referential: let’s innovate on making innovation systematically embedded in organizations! ;) So, keep on experimenting!

Constraints on activities

23 October 2018 by Clark 2 Comments

When we design learning activities (per the activity-based learning model), ideally we’re looking to create an integration of a number of constraints around that assignment. I was looking to enumerate them, and (of course) I tried diagramming it.  Thought I’d share the first draft, and I welcome feedback!

Multiple constraints on assignments

The goal is an assignment that includes the right type of processing. This must align with what learners need to be able to do after the learning experience, whether at work or in a subsequent class. Of course, that’s factored into the objective for this learning activity (which is part of an overall sequence of learning).

Another constraint is making sure the setting is a context that helps establish the breadth of transfer. The choice should be sufficiently different from contexts seen in examples and other practice to facilitate abstracting the essential elements. And, of course, it’s ideally in the form of a story that the learner’s actions contribute to (read: resolve). The right level of exaggeration could play an (unrepresented) role in that story.

We also need the challenge in the activity to be in the right range of difficulty for the learner. This is the integration of flow and learning to create meaningful engagement.  And we want to include ways in which learners typically go wrong (read: misconceptions). Learners need to be able to make the mistakes here so we’re trapping and addressing them in the learning situation, not when it could matter.

Finally, we want to make sure there’s enough variation across tasks. While some similarity benefits both consistency and addressing the objective, variety maintains interest; we need to strike that balance. Similarly, look at the overall workload: how much are we expecting, and is that appropriate given the other demands outside this learning goal?

I think you can see that successfully integrating these is non-trivial, and I haven’t even gotten into how to evaluate this, particularly to make it a part of an overall assessment. Yet, we know that multiple constraints help make the design easier (at least until you constrain yourself to an empty solution set ;).  This is probably still a mix of art and science, but by being explicit you’re less likely to miss an element.
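One way to be explicit about these constraints is to treat them as a design checklist. Here’s a minimal sketch of that idea; the class and field names are my own invention for illustration, not any standard framework:

```python
from dataclasses import dataclass

# Hypothetical checklist of the constraints discussed above. Each field
# records whether a draft assignment design satisfies one constraint.
@dataclass
class ActivityDesign:
    aligns_with_objective: bool   # right type of processing for the goal
    novel_context: bool           # setting differs from worked examples
    embedded_in_story: bool       # learner actions drive the narrative
    challenge_in_range: bool      # difficulty suits the learner (flow)
    traps_misconceptions: bool    # common errors can surface safely
    varied_across_tasks: bool     # enough variation to maintain interest
    workload_appropriate: bool    # total effort is reasonable

    def missing_elements(self) -> list[str]:
        """Return the names of any constraints this design fails to meet."""
        return [name for name, met in vars(self).items() if not met]

draft = ActivityDesign(True, True, False, True, True, False, True)
print(draft.missing_elements())  # ['embedded_in_story', 'varied_across_tasks']
```

The point isn’t the code itself, but the discipline: making each constraint an explicit item means you’re less likely to silently skip one.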

We want to align activities with the desired outcome, in the full context.  So, what am I missing?  Does this make sense?

 

Processing

18 October 2018 by Clark Leave a Comment

I’ve been thinking a lot about processing in learning of late; what processing matters, when, and why. I thought I’d share my thinking with you and see what you think.  This is  my processing!  :)

We know processing is useful. You can consider Craik & Lockhart’s Levels of Processing model, or look to the importance of retrieval practice as highlighted in Brown, Roediger, and McDaniel’s Make it Stick. The point is that retrieving information from memory and doing things with it increases the likelihood of learning. One of the questions is  “what sort of retrieval (or processing)?”

I’ve always advocated for  applying the information, doing something with it.  But there are actually a variety of useful things we can do:

  • representing information (a form of reflection) whether rewriting, or mindmapping, or…
  • connecting to other known information, personal or professional
  • considering how it would be applied in practice
  • applying it in practice, real or simulated

Of course, we want there to be scrutiny and feedback for the learning to be optimized, etc.

Now, this is in the individual instance, but I’m also looking at the sequence of processing. What would a series of activities that develops understanding look like? So, for instance, for a problem-solving practice like troubleshooting a process, what might you do? You might have (say, after a model of the process, and examples) a sequence of:

  • critique someone else’s performance
  • try a simple example of performing
  • try a more complex example (perhaps in a group)
  • …(more examples of performing)
  • try a very complex (read: typical) example

We could throw in related tasks as well either during or as a summary:

  • create a checklist to follow
  • draw a flow diagram
  • create a representation

On a more categorical task, say determining whether a situation qualifies as this or not (with shades of grey in between), we would have a similar structure, but with different types of tasks (again, after initial content such as definition and examples):

  • review a case where it clearly is (white)
  • review a case where it clearly isn’t (black)
  • group review a case of grey (but not too bad)
  • group review a case of grey (more shady)
  • …

Again, we could have interim or summary tasks:

  • summarize the constraints
  • document a proposed process
  • make a plan for how to do it in the future
  • …
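A progression like either of the above can be made concrete as data. This is only a sketch (the task labels and field names are mine, not from any published framework), but it lets the scaffolding rule — complexity never decreases across the sequence — be checked explicitly:

```python
# Hypothetical sketch of a processing sequence: each activity has a task,
# a complexity level, and whether it's done individually or in a group.
troubleshooting_sequence = [
    {"task": "critique a worked performance", "complexity": 1, "social": False},
    {"task": "perform a simple example",      "complexity": 2, "social": False},
    {"task": "perform a harder example",      "complexity": 3, "social": True},
    {"task": "perform a typical example",     "complexity": 4, "social": False},
    {"task": "create a checklist (summary)",  "complexity": 4, "social": False},
]

def complexity_increases(seq):
    """Check the scaffolding rule: complexity never decreases along the way."""
    levels = [a["complexity"] for a in seq]
    return all(a <= b for a, b in zip(levels, levels[1:]))

print(complexity_increases(troubleshooting_sequence))  # True
```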

What I’ve explicitly added here is when and why to go ‘social’. There are benefits to social processing, but should every activity be social? I’ll argue that there’s some initial prep that’s best done individually, to get everyone on the same page; since all learners are different, it helps if this step is individual. Then there’s often value in doing it socially, for the reasons in the linked post. Then, I reckon, there’s value in doing something independently, to consolidate the learning, and, of course, to determine what capability the individual has acquired.

The point I want to make is that the processing  flow, the progression from activity to activity, matters. We want to introduce, diverge, and then converge.  We do need to elaborate across contexts to support transfer, and of course increase complexity until they’ve developed the ability to deal with the typical difficulty of cases.

I’m thinking that, too often, we forget the consolidation phase.  And we’re often doing processing that’s somewhat like what we need them to do, but ultimately tangential. There are multiple constraints here to be acknowledged, cognitive such as depth and breadth as well as pragmatic such as cost and time, but we want to find the right intersection.

And my practical question is: where does this fall apart? Are there situations where this doesn’t make sense? I realize there are other types of outcomes that I haven’t represented (I’m being indicative, not exhaustive ;), but is this a useful way to think about it?

 

Labels, models, and drives

16 October 2018 by Clark Leave a Comment

In my post last week on engagement, I presented the alignment model from my  Engaging Learning  book on designing learning experiences. And as I thought about the post, I pondered several related things about labels, models, and drives. I thought I’d wrestle with them ‘out loud’ here, and troll (in the old sense) to see what you think.

Some folks have branded a model and lived on that for their career. And, in a number of cases, that’s not bad: they’re useful models and their applicability hasn’t diminished. And while, for instance, I think that alignment model is as useful as most models I’ve seen, I didn’t see any reason to tie my legacy to it, because the principles I like to comprehend and then apply to create solutions aren’t limited to just engagement. Though I wonder if people would find it easier to put the model in practice if it had a label.  The Quinn Engagement model or somesuch?

I’ve also created models around mobile, and about performance ecosystems, and more. I can’t say that they’re all original (e.g. the 4Cs of mobile), though I think they have utility. And some have labels (again, the 4Cs, the Least Assistance Principle…). Then the misconceptions book is very useful, but the coverage there isn’t really mine, either; it’s just a useful compendium. I expect to keep creating models. But it led to another thought…

I’ve seen people driven to build companies. They just keep doing it, even if they’ve built one and sold it, they’re always on it; they’re serial entrepreneurs. I, for instance, have no desire to do that. There are elements to that that aren’t me.    Other folks are driven to do research: they have a knack for designing experiments that tease out the questions that drive them to find answers. And I’ve been good at that, but it’s not what makes my heart beat faster. I do  like action research, which is about doing with theory, and reflecting back. (I also like helping others become able to do this.)

What I’m about is understanding and applying cognitive science (in the broad sense) to help people do important things in ways that are enabled by new technologies.  Models that explain disparate domains are a hobby. I like finding ways to apply them to solve new problems in ways that are insightful but also pragmatic.   If I create models along the way (and I do), that’s a bonus. Maybe I should try to create a model about applying models or somesuch. But really, I like what I do.

The question I had though, is whether anyone’s categorized ‘drives’.  Some folks are clearly driven by money, some by physical challenges. Is there a characterization?  Not that there needs to be, but the above chain of thought led me to be curious. Is there a typology of drives? And, of course, I’m skeptical if there is one (or more), owing to the problems with, for instance, personality types and learning styles :D. Still, welcome any pointers.

Another Day Another Myth-Ridden Hype Piece

9 October 2018 by Clark 1 Comment

Some days, it feels like I’m playing whack-a-mole. I got an email blast from an org (need to unsubscribe) that included a link that just reeked of being a myth-ridden piece of hype.  So I clicked, and sure enough!  And, as part of my commitment to showing my thinking, I’m taking it down. I reckon it’s important to take these myths apart, to show the type of thinking we should avoid if not actively attack.  Let me know if you don’t think this is helpful.

The article starts by talking about millennials. That’s a problem right away, as millennials is an arbitrary grouping by birthdate, and therefore is inherently discriminatory. The boundaries are blurry, and most of the differences can be attributed to age, not generation. And that’s a continuum, not a group. As the data shows.  Millennials is a myth.

Ok, so they go on to say: “Changing the approach from adapting to Millennials to leveraging Millennials is the key…”  Ouch!  Maybe it’s just me, but while I like to leverage assets, I think saying that about people seems a bit rude.  Look, people are people!  You work with them, develop them, etc. Leverage them?  That sounds like you’re using them (in the derogatory sense).

They go on to talk about Learning Organizations, which I’m obviously a fan of.  And so the ability to continue to learn is important.  No argument. But why would that be specific to ‘millennials’?  Er…

Here’s another winner: “They natively understand the imperative of change and their clockspeed is already set for the accelerated learning this requires.” This smacks of the ‘digital native’ myth. Young people’s wetware isn’t any different from anyone else’s. They may be more comfortable with the technology, but making assumptions like this ignores the fact that any one individual may not fit the group mean. And it’s demonstrable that their information skills aren’t any better because of their age.

We move on to 3 ways to leverage millennials:

  1. Create Cross-pollination through greater teamwork.  Yeah, this is a good strategy.  FOR EVERYONE. Why attribute it just to millennials?  Making diverse teams is just good strategy, period. Including diversity by age? Sure. By generation?  Hype. You see this  also with the ‘use games for learning’ argument for millennials. No, they’re just better learning designs! (Ok, with the caveat: if done well.)
  2. Establish a Feedback-Driven Culture to Learn and Grow Together. That’s a fabulous idea; we’re finding that moving to a coaching culture with meaningful assignments and quick feedback (not quarterly or yearly) is valuable. We can correct course earlier, and people feel more engaged. Again, for everyone.
  3. Embrace a Trial-and-Error Approach to Learning to Drive Innovation. Ok, now here I think it’s going off the rails. I’m a fan of experimentation, but trial and error can be smart or random. Only one of those two makes sense. And, to be fair, they do argue for good experimentation in terms of rigor in capturing data and sharing lessons learned. It’s valuable, but again, why is this unique to millennials? It’s just a good practice for innovation.

They let us know there are 3 more ways they’ll share in their next post.  You can imagine my anticipation.  Hey, we can read  two  posts with myths, instead of just one.  Happy days!

Yes, do the right things (please), but  for the right reasons. You could be generous and suggest that they’re using millennials as a stealth tactic to sneak in messages about modern workplace learning.  I’m not, as they seem to suggest doing this largely with millennials. This sounds like hype written by a marketing person. And so, while I advocate the policies, I eschew the motivation, and therefore advise you to find better sources for your innovation practices. Let me know if this is helpful (or not ;).

Why Myths Matter

3 October 2018 by Clark 3 Comments

I’ve called out a number of myths (and superstitions, and misconceptions) in my latest tome, and I’m grateful people appear to be interested.  I take this as a sign that folks are beginning to really pay attention to things like good learning design. And that’s important. It’s also  important not to minimize the problems myths can create. I do that in my presentations, but I want to go a bit deeper.  We need to care about why myths matter to limit our mistakes!

It’s easy to think something like “they’re wrong, but surely they’re harmless”. What can a few misguided intentions matter? Can it hurt if people are helped to understand that people are different? Won’t it draw attention to important things like caring for our learners? Isn’t it good if people are more open-minded?

Would that this were true. However, let me spin it another way: does it matter if we invest in things that don’t have an impact?  Yes, for two reasons.  One, we’re wasting time and money. We will pay for workshops and spend time ensuring our designs have coverage for things that aren’t really worthwhile. And that’s both profligate and unprofessional.  Worse, we’re also not investing in things that might actually matter.  Like, say,  Serious eLearning. That is, research-derived principles about what  actually works. Which is what we should be getting dizzy about.

But there are worse consequences. For one, we could be undermining our own design efforts. Some of these myths may have us do things that undermine the effectiveness of our work. If we work too hard to accommodate non-existent ‘styles’, for instance, we might use media inappropriately. More problematic, we could be limiting our learners. Many of the myths want to categorize folks: styles, gender, left/right brain, age, etc.  And, it’s true, being aware of how diversity strengthens is important. But too often people go beyond; they’ll say “you’re an XYZ”, and people will self-categorize and consequently self-limit.  We could cause people not to tap into their own richness.

That’s still not the worst thing. One thing that most such instruments explicitly eschew is being used as a filter: hire/fire, or job role. And yet it’s being done. In many ways!  This means that you might be limiting your organization’s diversity. You might also be discriminatory in a totally unjustifiable way!

Myths are not just wasteful, they’re harmful. And that matters.  Please join me in campaigning for legitimate science in our profession. And let’s chase out the snake oil.  Please.

Wise technology?

25 September 2018 by Clark Leave a Comment

At a recent event, they were talking about AI (artificial intelligence) and DI (decision intelligence). And, of course, I didn’t know what the latter was, so it was of interest. The description mentioned visualizations, so I was prepared to ask about the limits, but the talk ended up being more about decisions (a topic I am interested in) and values. Which was an intriguing twist. And this, not surprisingly, led me back to wisdom.

The initial discussion talked about using technology to assist decisions (c.f. AI), but I didn’t really comprehend the discussion around decision intelligence. A presentation on DA, decision analysis, however, piqued my interest. In it, a fellow who’d done his PhD thesis on decision making made the point that when you evaluate the outputs of decisions, determining whether an outcome was good requires values.

Now this to me ties very closely back to the Sternberg model of wisdom. There, you evaluate both short- and long-term implications, not just for you and those close to you but more broadly, and with an  explicit  consideration of values.

A conversation after the event formally concluded cleared up the DI issue. It apparently is not training up one big machine learning network to make a decision, but instead having the disparate components of the decision modeled separately and linking them together conceptually. In short, DI is about knowing what makes a good decision and using it. That is, being very clear on the decision making framework to optimize the likelihood that the outcome is right.

And, of course, you analyze the decision afterward to evaluate the outcomes. You do the best you can with DI, and then determine whether it was right with DA. Ok, I can go with that.

What intrigues me, of course, is how we might use technology here. We can provide guidelines about good decisions, provide support through the process, etc. And, if we want to move from smart to wise decisions, we bring in values explicitly, as well as long-term and broad impacts. (There was an interesting diagram where the short-term result was good but the long-term wasn’t: the ‘lobster claw’.)

What would be the outcome of wiser decisions?  I reckon in the long term, we’d do better for all of us. Transparency helps, seeing the values, but we’d like to see the rationale too. I’ll suggest we can, and should, be building in support for making wiser decisions. Does that sound wise to you?

Post popularity?

18 September 2018 by Clark 1 Comment

My colleague, Will Thalheimer, asked what posts were most popular (if you blog, you can participate too).  For complicated reasons, I don’t have Google Analytics running.  However, I found I have a WordPress plugin called Page Views. It helpfully can list my posts by number of guest views.  I was surprised by the winner (and less so by the runner up). So it makes me wonder what leads to post popularity.

The winner was a post titled New Curricula? In it, I quote a message from a discussion that called for meta-cognitive and leadership skills, and briefly made the case to support the idea. I certainly don’t think it was one of my most eloquent calls for this, though, of course, I do believe in it. So why? I have to admit I’m inclined to believe that folks, searching on the term, came to this post, rather than that it was so important on its own merits.

Which isn’t the case with the post that had the second most views.  This one, titled  Stop creating, selling, and buying garbage!, was a rant about our industry. And this one, I believe, was popular because it could be viewed as controversial, or at least, a strong opinion.  I was trying to explain why we have so much bad elearning (c.f. the  Serious eLearning Manifesto), and talking about various stakeholders and their hand in perpetuating the sorry state of affairs.

Interestingly, I won an award last year for my post on AR (yes, I was on the committee, but we didn’t review our own).  And, I was somewhat flummoxed on that one too. Not that there weren’t good thoughts in it, but it was pretty simple in the mechanism: I (digitally) drew on some photos!  Yet clearly that made something concrete that folks had wondered about.

Of course, I think there’s also some luck or fate in it as well. Certainly, the posts I think are most interesting aren’t the ones others perceive as such. But then, I’m biased. And perhaps some are used in a class, so you get a number of people pointed to them, or something; I really have no way to know. I note that the posts here at Learnlets are more unformed thoughts, and my attempts at more definitive thoughts appear at the Litmos blog and now at my Quinnsights columns at Learning Solutions.

I’ll be interested in Will’s results (regardless of whether my data makes it in, because without analytics I couldn’t answer some of his questions).  And, of course, I welcome any thoughts you have about what makes a post popular (beyond SEO :), and/or what you’d  like to read!

Revisiting personal learning

11 September 2018 by Clark 2 Comments

A number of years ago, I tried a stab at an innovation process. I was reminded of it thinking about personal learning, and looked at it again. And it doesn’t seem to have aged well. So I thought I’d revisit the model and see what emerged. Here’s a mindmap of personal learning, and the associated thinking.

The earlier 5R model was based on Harold Jarche’s Seek-Sense-Share model, a deep model that has many rich aspects. I had reservations about the labels, and I think it’s sparse at either end. (And I worked too hard to try to keep it to ‘R’s; Reify just doesn’t work for me. ;)

Personal learning

In this new approach, I have a richer representation at either end. My notion of ‘seek’ (yes, I’m still using Harold’s framework; more at the end) has three different aspects. First is ‘info flows’: setting up the streams you will monitor. They’re filters on the overwhelming overload of info available, your antennae for resonating with interesting new bits. You can also search for information, using DuckDuckGo or Google, or going straight to Wikipedia or other appropriate information resources you know. And, of course, you can ask, using your network, or Quora, or any social media platform like LinkedIn or Facebook. And there are different details in each.

To make sense of the information, you can do either or both of representing your understanding and experimenting. Representing is a valuable way to process what you’re hearing, to make it concrete. Experimenting is putting it to the test. And you naturally do both: for instance, you read a web page telling you how to do something new, then put it into practice and see if it works. Both require reflection, but getting concrete, by trying it out or rendering it, is valuable. Again, representing and experimenting break down into further details.

What you learn can (and often should) be shared. At whatever stage you’re at, there’s probably someone who would benefit from what you’ve learned. You can post it publicly (like this blog), or circulate it to a well-selected set of individuals (and that can range from one other person to a small group or a limited channel). Or you can merely have it in readiness so that, if someone asks, you can point them to your thoughts. Which is different from pointing them to some other resource: useful, but not necessarily learning. The point is to have others providing feedback on where you’re at.
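The three phases and their aspects can be rendered as nested data. This is just a sketch of the mindmap described above (the abbreviated sub-item labels are my own wording), with a small meta-learning check bolted on:

```python
# A sketch of the personal-learning mindmap as nested data. The top-level
# phases follow Harold Jarche's Seek-Sense-Share framing; the sub-items
# summarize the aspects described in the post.
personal_learning = {
    "seek": ["info flows (streams you monitor)",
             "search (DuckDuckGo, Wikipedia, ...)",
             "ask (your network, Quora, social media)"],
    "sense": ["represent your understanding",
              "experiment (put it to the test)"],
    "share": ["post publicly (e.g. a blog)",
              "circulate to selected individuals",
              "hold in readiness for when someone asks"],
}

# Meta-learning: a quick audit of which phases you're actually practicing.
def unpracticed(phases_used):
    return [p for p in personal_learning if p not in phases_used]

print(unpracticed({"seek", "share"}))  # ['sense']
```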

I looked at Harold’s model more deeply after I did this exercise (a meta-learning exercise on its own; take your own stab and then see what others have done). I realize mine is done on sort of a first-principles basis from a cognitive perspective, while his is richer, being grounded in others’ frameworks. Harold’s is also more tested, having been used extensively in his well-regarded workshop.

I note that part of the meta-learning here is the ongoing monitoring of your own processes (the starred grey clouds). This is a key part of Harold’s workshop, by the way. Looking at your processes and evaluating them. An early exercise where you evaluate your own network systematically, for instance, struck me as really insightful. I’m grateful he was willing to share his materials with me.

So, this has been my sensing and sharing, so I hope you’ll take the opportunity to provide feedback!  What am I missing?

 

 

Are Decisions the Key?

4 September 2018 by Clark 2 Comments

A number of years ago, now, Brenda Sugrue posited that Bloom’s Taxonomy was wrong, and she proposed a simpler framework. I’ve never been a fan of Bloom’s; folks have trouble applying it systematically (reliably discriminating between the various levels). And, while it pushed for higher levels, it let people off the hook if they decided a lower level would do. Sugrue first proposed a simpler taxonomy, and also an alternative that was just performance. In her later version, she aligned the former to the work of the Science of Learning Center’s KLI (knowledge-learning-instruction) framework. But I want to go back to her ‘pure performance’ model, and make the case that decisions are key: that they are not only necessary but also sufficient.

So her latest model discriminates between concept, process, fact, principle, etc. And, I would agree, there are likely different pedagogies applicable to each. Is that enough of a basis? Let me suggest a different approach, because there’s one meaningful way in which they don’t differ. For each, you need to take some action, whether it’s to:

  • classify as a fact (is it a this or a that)
  • perform the steps (which action to take now)
  • troubleshoot the process (what experiment to run now)
  • predict an outcome (what will happen)

Note, however, that for each there’s an associated decision. And that, to me, is core. Now, I’m not claiming that they all require the same approach. For instance, to help people deal with ambiguous decisions, I suggested a collaborative approach to discuss the parameters and unpack the thinking. To teach troubleshooting, I would give some practice making conceptual decisions about the systems that could cause the observed symptoms. In internal combustion engines (read: cars), if it’s not running, is it the air/fuel system or the electrical system? How could you narrow that down? In a diesel, you could eliminate the electrical ;).
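The engine example can be sketched as a tiny decision tree. The specific questions and branches here are hypothetical simplifications, just to illustrate the point that the unit of practice is the decision, not the recall of components:

```python
# Hypothetical decision tree for the engine-troubleshooting example:
# each inner node is a question (a decision), each leaf a diagnosis.
TREE = {
    "question": "Is it a diesel?",
    "yes": {"diagnosis": "air/fuel system"},   # no ignition electrics to blame
    "no": {
        "question": "Does the starter turn the engine over?",
        "yes": {"diagnosis": "air/fuel system"},
        "no": {"diagnosis": "electrical system"},
    },
}

def diagnose(node, answers):
    """Walk the tree using a dict mapping each question to 'yes' or 'no'."""
    while "diagnosis" not in node:
        node = node[answers[node["question"]]]
    return node["diagnosis"]

print(diagnose(TREE, {"Is it a diesel?": "yes"}))  # air/fuel system
```

Practice, on this view, means putting learners at the branch points and having them choose, then giving feedback on the path taken.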

Van Merriënboer, in his Four Component Instructional Design, talks about the knowledge you need and the complex decisions you apply that knowledge to. I agree, and so it’s not just about decisions. However, even the knowledge needs to be applied to stick. To test that learners have acquired the underpinning knowledge, you can have them exercise the models in decisions.

Ok, so you might want to short-circuit the mapping from decision to practice. I think a good heuristic (ala Cathy Moore’s Action Mapping) is just to have them do what they need to do, and give them the necessary information. However, if you want to create a ‘cheat sheet’ to accelerate performance and success, with learning goals and associated pedagogies, I won’t quibble.

Now, you can’t provide all the situations, so you need to choose the right ones that will help facilitate abstraction and transfer. You may need to also ensure that they know the requisite information, so you may need to determine that, but I think exercising the models in simpler situations helps develop them more than just a presentation.

I’m suggesting that focusing radically on decisions is the best way to work with SMEs, and is the best guide for designing practice (e.g. put learners in situations to make decisions). Everything else revolves around that. Now, are these categories reliable  types of decisions?  Will ponder. Your thoughts?
