Learnlets

Clark Quinn’s Learnings about Learning

Labels, models, and drives

16 October 2018 by Clark

In my post last week on engagement, I presented the alignment model from my  Engaging Learning  book on designing learning experiences. And as I thought about the post, I pondered several related things about labels, models, and drives. I thought I’d wrestle with them ‘out loud’ here, and troll (in the old sense) to see what you think.

Some folks have branded a model and lived on that for their career. And, in a number of cases, that's not bad: they're useful models and their applicability hasn't diminished. And while, for instance, I think that alignment model is as useful as most models I've seen, I didn't see any reason to tie my legacy to it, because the principles I like to comprehend and then apply to create solutions aren't limited to just engagement. Though I wonder if people would find it easier to put the model into practice if it had a label. The Quinn Engagement model or somesuch?

I’ve also created models around mobile, and about performance ecosystems, and more. I can’t say that they’re all original (e.g. the 4Cs of mobile), though I think they have utility. And some have labels (again, the 4Cs, Least Assistance Principle…). Then the misconceptions book is very useful, but the coverage there isn’t really mine, either. It’s just a useful compendium. I expect to keep creating models. But it led to another thought…

I’ve seen people driven to build companies. They just keep doing it, even if they’ve built one and sold it, they’re always on it; they’re serial entrepreneurs. I, for instance, have no desire to do that. There are elements to that that aren’t me.    Other folks are driven to do research: they have a knack for designing experiments that tease out the questions that drive them to find answers. And I’ve been good at that, but it’s not what makes my heart beat faster. I do  like action research, which is about doing with theory, and reflecting back. (I also like helping others become able to do this.)

What I’m about is understanding and applying cognitive science (in the broad sense) to help people do important things in ways that are enabled by new technologies.  Models that explain disparate domains are a hobby. I like finding ways to apply them to solve new problems in ways that are insightful but also pragmatic.   If I create models along the way (and I do), that’s a bonus. Maybe I should try to create a model about applying models or somesuch. But really, I like what I do.

The question I had, though, is whether anyone’s categorized ‘drives’. Some folks are clearly driven by money, some by physical challenges. Is there a characterization? Not that there needs to be, but the above chain of thought left me curious. Is there a typology of drives? And, of course, I’m skeptical if there is one (or more), owing to the problems with, for instance, personality types and learning styles :D. Still, I welcome any pointers.

Another Day Another Myth-Ridden Hype Piece

9 October 2018 by Clark

Some days, it feels like I’m playing whack-a-mole. I got an email blast from an org (need to unsubscribe) that included a link that just reeked of being a myth-ridden piece of hype.  So I clicked, and sure enough!  And, as part of my commitment to showing my thinking, I’m taking it down. I reckon it’s important to take these myths apart, to show the type of thinking we should avoid if not actively attack.  Let me know if you don’t think this is helpful.

The article starts by talking about millennials. That’s a problem right away, as millennials is an arbitrary grouping by birthdate, and therefore is inherently discriminatory. The boundaries are blurry, and most of the differences can be attributed to age, not generation. And that’s a continuum, not a group. As the data shows.  Millennials is a myth.

Ok, so they go on to say: “Changing the approach from adapting to Millennials to leveraging Millennials is the key…”  Ouch!  Maybe it’s just me, but while I like to leverage assets, I think saying that about people seems a bit rude.  Look, people are people!  You work with them, develop them, etc. Leverage them?  That sounds like you’re using them (in the derogatory sense).

They go on to talk about Learning Organizations, which I’m obviously a fan of.  And so the ability to continue to learn is important.  No argument. But why would that be specific to ‘millennials’?  Er…

Here’s another winner: “They natively understand the imperative of change and their clockspeed is already set for the accelerated learning this requires.”  This smacks of the ‘digital native’ myth.  Young people’s wetware isn’t any different than anyone else’s. They may be more comfortable with the technology, but making assumptions such as this undermines the fact that any one individual may not fit the group mean. And it’s demonstrable that their information skills aren’t any better because of their age.

We move on to 3 ways to leverage millennials:

  1. Create Cross-pollination through greater teamwork.  Yeah, this is a good strategy.  FOR EVERYONE. Why attribute it just to millennials?  Making diverse teams is just good strategy, period. Including diversity by age? Sure. By generation?  Hype. You see this  also with the ‘use games for learning’ argument for millennials. No, they’re just better learning designs! (Ok, with the caveat: if done well.)
  2. Establish a Feedback-Driven Culture to Learn and Grow Together. That’s a fabulous idea; we’re finding that moving to a coaching culture with meaningful assignments and quick feedback (not quarterly or yearly) is valuable. We can correct course earlier, and people feel more engaged. Again, for everyone.
  3. Embrace a Trial-and-Error Approach to Learning to Drive Innovation. Ok, now here I think it’s going off the rails. I’m a fan of experimentation, but trial and error can be smart or random. Only one of those two makes sense. And, to be fair, they do argue for good experimentation in terms of rigor in capturing data and sharing lessons learned. It’s valuable, but again, why is this unique to millennials? It’s just a good practice for innovation.

They let us know there are 3 more ways they’ll share in their next post.  You can imagine my anticipation.  Hey, we can read  two  posts with myths, instead of just one.  Happy days!

Yes, do the right things (please), but  for the right reasons. You could be generous and suggest that they’re using millennials as a stealth tactic to sneak in messages about modern workplace learning.  I’m not, as they seem to suggest doing this largely with millennials. This sounds like hype written by a marketing person. And so, while I advocate the policies, I eschew the motivation, and therefore advise you to find better sources for your innovation practices. Let me know if this is helpful (or not ;).

Why Myths Matter

3 October 2018 by Clark

I’ve called out a number of myths (and superstitions, and misconceptions) in my latest tome, and I’m grateful people appear to be interested.  I take this as a sign that folks are beginning to really pay attention to things like good learning design. And that’s important. It’s also  important not to minimize the problems myths can create. I do that in my presentations, but I want to go a bit deeper.  We need to care about why myths matter to limit our mistakes!

It’s easy to think something like “they’re wrong, but surely they’re harmless”. What can a few misguided intentions matter? Can it hurt if people are helped to understand that people are different? Won’t it draw attention to important things like caring for our learners? Isn’t it good if people are more open-minded?

Would that this were true. However, let me spin it another way: does it matter if we invest in things that don’t have an impact?  Yes, for two reasons.  One, we’re wasting time and money. We will pay for workshops and spend time ensuring our designs have coverage for things that aren’t really worthwhile. And that’s both profligate and unprofessional.  Worse, we’re also not investing in things that might actually matter.  Like, say,  Serious eLearning. That is, research-derived principles about what  actually works. Which is what we should be getting dizzy about.

But there are worse consequences. For one, we could be undermining our own design efforts. Some of these myths may have us do things that undermine the effectiveness of our work. If we work too hard to accommodate non-existent ‘styles’, for instance, we might use media inappropriately. More problematic, we could be limiting our learners. Many of the myths want to categorize folks: styles, gender, left/right brain, age, etc. And, it’s true, being aware of how diversity strengthens us is important. But too often people go beyond; they’ll say “you’re an XYZ”, and people will self-categorize and consequently self-limit. We could cause people not to tap into their own richness.

That’s still not the worst thing. One thing that most such instruments explicitly eschew is being used as a filter: hire/fire, or job role. And yet it’s being done. In many ways!  This means that you might be limiting your organization’s diversity. You might also be discriminatory in a totally unjustifiable way!

Myths are not just wasteful, they’re harmful. And that matters.  Please join me in campaigning for legitimate science in our profession. And let’s chase out the snake oil.  Please.

Wise technology?

25 September 2018 by Clark

At a recent event, they were talking about AI (artificial intelligence) and DI (decision intelligence). And, of course, I didn’t know what the latter was, so it was of interest. The description mentioned visualizations, so I was prepared to ask about the limits, but the talk ended up being more about decisions (a topic I  am interested in) and values. Which was an intriguing twist. And this, not surprisingly, led me back to wisdom.

The initial discussion talked about using technology to assist decisions (c.f. AI), but I didn’t really comprehend the discussion around decision intelligence. A presentation on DA, decision analysis, however, piqued my interest. In it, a guy who’d done his PhD thesis on decision making talked about how, when you evaluate the outputs of decisions to determine whether the outcome was good, you need values.

Now this to me ties very closely back to the Sternberg model of wisdom. There, you evaluate both short- and long-term implications, not just for you and those close to you but more broadly, and with an  explicit  consideration of values.

A conversation after the event formally concluded cleared up the DI issue. It apparently is not training up one big machine learning network to make a decision, but instead having the disparate components of the decision modeled separately and linking them together conceptually. In short, DI is about knowing what makes a good decision and using it. That is, being very clear on the decision making framework to optimize the likelihood that the outcome is right.

And, of course, you analyze the decision afterward to evaluate the outcomes. You do the best you can with DI, and then determine whether it was right with DA. Ok, I can go with that.
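The “separately modeled components, explicitly linked” framing of DI can be sketched in code. This is a toy illustration under my own assumptions; the demand and cost functions, and all the names, are invented for the example, not taken from the talk:

```python
# Sketch of the DI idea: model the components of a decision separately,
# then link them in an explicit decision framework, rather than training
# one opaque end-to-end model. All components here are hypothetical toys.

def estimated_demand(price: float) -> float:
    """Toy demand component: higher price, lower expected demand."""
    return max(0.0, 100.0 - 10.0 * price)

def estimated_cost(units: float) -> float:
    """Toy cost component: fixed cost plus per-unit cost."""
    return 20.0 + 2.0 * units

def decide_price(candidates: list[float]) -> float:
    """The explicit link: choose the price that maximizes expected profit."""
    def expected_profit(price: float) -> float:
        units = estimated_demand(price)
        return price * units - estimated_cost(units)
    return max(candidates, key=expected_profit)

def analyze_decision(actual_profit: float, predicted_profit: float) -> str:
    """Crude decision analysis (DA): compare the outcome to the prediction."""
    if abs(actual_profit - predicted_profit) < 0.1 * abs(predicted_profit):
        return "model held up"
    return "revisit components"
```

The point of the sketch is the structure: each component can be inspected, improved, or swapped independently, and the decision rule that links them is explicit rather than buried in one big network, which is (as I understand it) the DI claim.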

What intrigues me, of course, is how we might use technology here.  We can provide guidelines about good decisions, provide support through the process, etc. And, if we want to move from smart to  wise decisions, we bring in values explicitly, as well as long-term and broad impacts. (There was an interesting diagram where the short-term result was good but the long-term wasn’t; it was the ‘lobster claw’.)

What would be the outcome of wiser decisions?  I reckon in the long term, we’d do better for all of us. Transparency helps, seeing the values, but we’d like to see the rationale too. I’ll suggest we can, and should, be building in support for making wiser decisions. Does that sound wise to you?

Post popularity?

18 September 2018 by Clark

My colleague, Will Thalheimer, asked what posts were most popular (if you blog, you can participate too).  For complicated reasons, I don’t have Google Analytics running.  However, I found I have a WordPress plugin called Page Views. It helpfully can list my posts by number of guest views.  I was surprised by the winner (and less so by the runner up). So it makes me wonder what leads to post popularity.

The winner was a post titled  New Curricula?  In it, I quote a message from a discussion that called for meta-cognitive and leadership skills, and briefly made the case to support the idea.  I certainly don’t think it was one of my most eloquent calls for this. Though, of course, I do believe in it.  So why?  I have to admit I’m inclined to believe that folks, searching on the term, came to this post, rather than that it was so important on its own merits.

Which isn’t the case with the post that had the second most views.  This one, titled  Stop creating, selling, and buying garbage!, was a rant about our industry. And this one, I believe, was popular because it could be viewed as controversial, or at least, a strong opinion.  I was trying to explain why we have so much bad elearning (c.f. the  Serious eLearning Manifesto), and talking about various stakeholders and their hand in perpetuating the sorry state of affairs.

Interestingly, I won an award last year for my post on AR (yes, I was on the committee, but we didn’t review our own).  And, I was somewhat flummoxed on that one too. Not that there weren’t good thoughts in it, but it was pretty simple in the mechanism: I (digitally) drew on some photos!  Yet clearly that made something concrete that folks had wondered about.

Of course, I think there’s also some luck or fate in it as well. Certainly, the posts I think are most interesting aren’t the ones others perceive.  But then, I’m biased. And perhaps some are used in a class so you get a number of people pointed to it or something. I really have no way to know.  I note that the posts here at Learnlets are more unformed thoughts, and my attempts at more definitive thoughts appear at the Litmos blog and now at my Quinnsights columns at Learning Solutions.

I’ll be interested in Will’s results (regardless of whether my data makes it in, because without analytics I couldn’t answer some of his questions).  And, of course, I welcome any thoughts you have about what makes a post popular (beyond SEO :), and/or what you’d  like to read!

Revisiting personal learning

11 September 2018 by Clark

A number of years ago, I took a stab at an innovation process. And I was reminded of it thinking about personal learning, and looked at it again. And it doesn’t seem to have aged well. So I thought I’d revisit the model, and see what emerged. So here’s a mindmap of personal learning, and the associated thinking.

The earlier 5R model was based on Harold Jarche’s Seek-Sense-Share model, a deep model that has many rich aspects. I had reservations about the labels, and I think it’s sparse at either end.  (And I worked too hard to try to keep it to ‘R’s, and  Reify just doesn’t work for me. ;)

Personal learning

In this new approach, I have a richer representation at either end. My notion of ‘seek’ (yes, I’m still using Harold’s framework, more at the end) has three different aspects. First is ‘info flows’. This is setting up the streams you will monitor. They’re filters on the overwhelming overload of info available. They’re your antenna for resonating with interesting new bits. You can also search for information, using DuckDuckGo or Google, or going straight to Wikipedia or other appropriate information resources you know. And, of course, you can ask, using your network, or Quora, or any social media platform like LinkedIn or Facebook.  And there are different details in each.

To make sense of the information, you can do either or both of representing your understanding and experimenting. Representing is a valuable way to process what you’re hearing, to make it concrete. Experimenting is putting it to the test. And you naturally do both; for instance, you read a web page telling you how to do something new, then put it into practice and see if it works. Both require reflection, but getting concrete in trying it out or rendering it is valuable. Again, representing and experimenting break down into further details.

What you learn can (and often should) be shared. At whatever stage you’re at, there’s probably someone who would benefit from what you’ve learned.  You can post it publicly (like this blog), or circulate it to a well-selected set of individuals (and that can range from one other person to a small group or some channel that’s limited).  Or you can merely have it in readiness so that if someone asks, you can point them to your thoughts. Which is different than pointing them to some other resource, which is useful, but not necessarily learning. The point is to have others providing feedback on where you’re at.

I looked at Harold’s model more deeply after I did this exercise (a meta-learning exercise on its own; take your own stab and then see what others have done).  I realize mine is done on sort of a first-principles basis from a cognitive perspective, while his is richer, being grounded in others’ frameworks. Harold’s is also more tested, having been used extensively in his well-regarded workshop.

I note that part of the meta-learning here is the ongoing monitoring of your own processes (the starred grey clouds). This is a key part of Harold’s workshop, by the way. Looking at your processes and evaluating them. An early exercise where you evaluate your own network systematically, for instance, struck me as really insightful. I’m grateful he was willing to share his materials with me.

So, this has been my sensing and sharing, so I hope you’ll take the opportunity to provide feedback!  What am I missing?

Are Decisions the Key?

4 September 2018 by Clark

A number of years ago now, Brenda Sugrue posited that Bloom’s Taxonomy was wrong, and she proposed a simpler framework. I’ve never been a fan of Bloom’s; folks have trouble applying it systematically (reliably discriminating between the various levels). And, while it pushed for higher levels, it let people off the hook if they decided a lower level would do. Sugrue first proposed a simpler taxonomy, and also an alternative that was just performance. In her later version, she’s aligned the former to the work of the Science of Learning Center’s KLI (knowledge-learning-instruction) framework. But I want to go back to her ‘pure performance’ model, and make the case that decisions are key, that they are not only necessary but also sufficient.

So her latest model discriminates between concept, process, fact, principle, etc.  And, I would agree, there are likely different pedagogies applied to each.  Is that basis enough?  Let me suggest a different approach, because I don’t see how they differ in one meaningful way. For each, you need to take some action, whether it’s to:

  • classify as a fact (is it a this or a that)
  • perform the steps (which action to take now)
  • troubleshoot the process (what experiment now)
  • predict an outcome (what will happen)

Note, however, that for each, there’s an associated decision. And that, to me, is core.  Now, I’m not claiming that they all require the same approach.  For instance, to help people deal with ambiguous decisions, I suggested a collaborative approach to discuss the parameters and unpack the thinking. To teach troubleshooting, I would give some practice making conceptual decisions about the systems that could cause the observed symptoms. In internal combustion engines (read: cars), if it’s not running, is it the air/fuel system or the electrical system? How could you narrow that down?  In a diesel, you could eliminate the electrical ;).
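The engine example can be framed as a series of decisions in a sketch like the following. This is a toy under my own assumptions (the rules and names are invented for illustration, not a real diagnostic procedure): each observation is a decision that narrows the candidate subsystems, which is the kind of practice I’m arguing for.

```python
# Toy decision-focused troubleshooting item: each observation narrows
# the set of subsystems that could explain a non-running engine.
# All rules here are hypothetical simplifications for illustration.

def candidate_causes(engine: str, spark_observed: bool) -> set[str]:
    """Decide which subsystems could still explain the symptom."""
    candidates = {"air/fuel", "electrical"}
    if engine == "diesel":
        # Compression ignition: no spark system to blame.
        candidates.discard("electrical")
    if spark_observed:
        # Seeing a spark rules out the ignition (electrical) side.
        candidates.discard("electrical")
    return candidates

def next_experiment(candidates: set[str]) -> str:
    """Decide what to test next, given what's still in play."""
    if candidates == {"air/fuel", "electrical"}:
        return "check for spark"
    if candidates == {"air/fuel"}:
        return "check fuel delivery"
    return "check battery and wiring"
```

The design point is that the learner isn’t reciting the subsystems; they’re making the classification and what-experiment-now decisions that the content exists to support.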

Van Merriënboer, in his Four Component Instructional Design, talks about the knowledge you need and the complex decisions you apply that to. I agree, and so it’s not  just  about decisions. However, even the knowledge needs to be applied to stick.  To test that learners have acquired the underpinning knowledge, you can have them exercising the models in decisions.

Ok, so you might want to short-circuit the mapping from decision to practice. I think a good heuristic (ala Cathy Moore’s Action Mapping) is just to have them do what they need to do, and give them the necessary information. However, if you want to create a ‘cheat sheet’ to accelerate performance and success, with learning goals and associated pedagogies, I won’t quibble.

Now, you can’t provide all the situations, so you need to choose the right ones that will help facilitate abstraction and transfer. You may need to also ensure that they know the requisite information, so you may need to determine that, but I think exercising the models in simpler situations helps develop them more than just a presentation.

I’m suggesting that focusing radically on decisions is the best way to work with SMEs, and is the best guide for designing practice (e.g. put learners in situations to make decisions). Everything else revolves around that. Now, are these categories reliable  types of decisions?  Will ponder. Your thoughts?

Transparency isn’t enough

30 August 2018 by Clark

Of late, there have been a number of articles talking about thinking and mental models (e.g. this one). One of the outcomes is that we have a lot of stories about how the world works.  Some of them are accurate. Others, not. And pondering this when I should’ve been sleeping, I realized that there was a likelihood that our misinterpretations could cause problems. It made me think that maybe transparency isn’t enough. What does that mean?

We build models, period. We create explanations about how the world works. And they may not be right.  If we aren’t given good ones up front, it’s likely. It’s also the case that they seem to come from previous models we’ve seen. (And diagrams. ;)

Now, it’s easy to misattribute an outcome to the wrong model if we don’t have better explanations. And this comes into play when we’re trying to figure out what has happened, or why something happened. This includes decisions made by others that may affect us, or even just lead to outcomes such as product designs, policies, or more.

Where I’m going is this: if we don’t see the thinking that explains how we got there, not just the process followed, we can infer wrongly about  why it happened. And this is important in the ‘show your work’ sense.

I’m a fan of transparency. I like it when politics and other decisions are scrutable; we can see who’s making the decision, what influences they’ve had, what steps they took to get there. That’s not enough, however. Particularly when you disagree or have a problem. Take LinkedIn, for example; when I connect to someone using the app on the qPad, I can then send them a message, but when I do it through the web interface on my computer, it wants to use one of those precious ‘InMail’s.  It’s inconsistent (read: frustrating). Is there a rationale?

So I’m going to suggest that just transparency is necessary, but not sufficient. You can’t just show your work, you need to show your thinking. You need to see the rationale!  Two reasons: you can learn more when you see the associated cogitation, and you can provide better feedback as well.  In short, we want to see  why they believe this is the right solution. Otherwise, we could question their decision because we misattribute the reasoning.

Transparency is great, but if you can’t see the thinking behind it, you can make wrong inferences.  It’s better if you can see the thinking  and the result. Is this transparent enough on both?

Realities: Why AR over VR

29 August 2018 by Clark

In the past, I’ve alluded to why I like Augmented Reality (AR) over Virtual Reality. And in a conversation this past week, I talked about realities a bit more, and I thought I’d share. Don’t get me wrong, I like VR  a lot, but I think AR has the bigger potential impact.  You may or may not agree, but here’s my thinking.

In VR, you create a completely artificial context (maybe mimicking a real one).  And you can explore or act on these worlds. And the immersiveness has demonstrably improved outcomes over a non-immersive experience.  Put to use for learning, where the affordances are leveraged appropriately, they can support  deep practice. That is, you can minimize the transfer distance to the real world, particularly where 3D is natural. For situations where the costs of failure are high (e.g. lives), this is  the best practice before mentored live performance. And we can do it at scales that are hard to do on flat screens: navigating molecules or microchips at one end, or large physical plants or astronomical scales at the other. And, of course, they can be completely fantastic, as well.

AR, on the other hand, layers additional information on  top of our existing reality. Whether with special glasses, or just through our mobile devices, we can elaborate on top of our visual and auditory world.  The context exists, so it’s a matter of extrapolating on it, rather than creating it whole. On the other hand, recognizing and aligning with existing context is hard.  Yet, being able to make the invisible visible where you already are, and presumably are for a reason that makes it intrinsically motivating, strikes me as a big win.

First, I think that the learning outcomes from VR are great, and I don’t mean to diminish them. However, I wonder how general they are, versus being specific to inherently spatial, and potentially social, learning.  Instead, I think there’s a longer term value proposition for AR. There’s less physical overhead in having your world annotated versus having to enter another one. While I’m not sure which will end up having greater technical overhead, the ability to add information to a setting to make it a learning one strikes me as a more generalizable capability.  And I could be wrong.

Another aspect is of interest to me, too. So my colleague was talking about mixed reality, and I honestly wondered what that was. His definition sounded like  alternate reality, as in alternate reality games. And that, to me, is also a potentially powerful learning opportunity. You can create a separate, fake but appearing real, set of experiences that are bound by story and consequences of action that can facilitate learning. We did it once with a sales training game that intruded into your world with email and voicemail. Or other situations where you have situations and consequences that intrude into your world and require decisions and actions. They don’t have  real consequences, but they do impact the outcomes. And these could be learning experiences too.

At core, to me, it’s about providing either deep practice or information at the ‘teachable moment’. Both are doable and valuable. Maybe it’s my own curiosity that wants to have information on tap, and that’s increasingly possible. Of course, I love a good experience, too. Maybe what’s really driving me is that if we facilitate meta-learning so people are good self-learners, having an annotated world will spark more ubiquitous learning. Regardless, both realities are good, and are either at the cusp or already doable.  So here’s to real learning!

Question: values?

22 August 2018 by Clark

So, I’m wrestling with how to characterize useful changes in an organization. I’ve been compiling a list of different tactics (e.g. implement coaching, show-your-work, support curation, etc), and want to map them to the changes you’ll get in the organization. I’ve wanted to tie them to another set of various outcomes: improved participation, innovation, etc. But, while I have the strategies, I’m looking at what breakdowns of outcomes are some minimal useful set. I’ll lay out my  very preliminary set of thoughts around the values we’re trying to develop/influence, and I welcome input, pointers, what have you.

My goal, I should be clear, is to try to take specific changes we want in an organization, and have them linked to specific tactics.  And, of course, a new school approach.  That is, tactics that move organizations into directions that create learning organizations.

I start with the three elements Dan Pink talks about in his book  Drive.  In it, he lists three core motivators of employees: Purpose, Autonomy, and Mastery (this is my order, not his).  Purpose is  why what you’re doing matters: what does this do for the org, and why does what the org is doing also matter?  Then, autonomy is when you’re given the freedom to pursue your purposes.  Now, you may not be completely capable of that, so there’s support for mastery, to develop the capabilities to succeed. I think these are all great, but are they sufficient in and of themselves? Are these the right things to want to impact?

I’m also a fan of Amy Edmondson’s quadrant model of psychological safety and accountability. Without either, you’re loafing. With just safety, you’re happy. With just accountability, you’re fearful. But if you have accountability  and  safety, you get results.  This draws upon the richer work of Garvin, Gino, and Edmondson on the components of innovation.  That model adds time for reflection, diversity, and openness to new ideas. Is this a better way to think about it?

There’re also personal values (which might be organizational, too).  Barack Obama, in his keynote to ATD 2018, had two very simple ones: be kind, and be useful.  I’ve extended that out one notch, to include three: responsibility (do the right thing, and  do something [useful]), integrity (honesty, do what you promise), and compassion (respect, helping, etc. [kind]).  Is that a full set? Or is responsibility derivable from integrity? I’ve collected a suite of value proposals (five, with entries ranging from 5 to 8 core values).  Can you derive some of the others from the three I have? E.g. does courage come from integrity and responsibility? Does fairness come from compassion and integrity?  I don’t know.

And so, I’m not sure what the  right core set is.  Trust has to be in there somehow, but is that derivative from integrity?  And do I frame it from the change we want in the org, or the change in the people?  I’m inclined to the former.  And are they unitary, or can the tactics impact more than one? (Preliminary: more than one.)

Obviously, I’m at an early stage in formulating this.  I can beaver away on it on my own, but I’m happy to hear pointers, thoughts, etc.  Yes, I’m trying to diagram it too, but nothing coherent has  yet emerged.  So, once again, this is me ‘thinking out loud’.  Care to do similarly and share?
