Learnlets

Clark Quinn’s Learnings about Learning

Confounding generations?

8 December 2015 by Clark 1 Comment

At the recent Online Educa Berlin, Laura Overton of Towards Maturity presented some stats in our joint session. While she mentioned that she really had to look for results where there were differences by age, she of course found some. (Which is already a problem: 5% of results are likely to be significant by random chance!) However, in at least one case I think the result is explained by a factor other than generations (not that she was making that claim). In those statistics was an interesting result that I want to look at from two different perspectives.
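
(A quick illustration of that multiple-comparisons point: the toy simulation below is my own, with made-up group sizes and item counts rather than the Towards Maturity data. It just shows that if you run many comparisons on pure noise at the conventional p < .05 bar, roughly 5% come up ‘significant’ anyway.)

    # Illustrative only: simulate many age-group comparisons on pure noise and
    # count how many clear the conventional p < .05 bar. Group sizes and the
    # number of comparisons are made-up assumptions, not survey data.
    import math
    import random
    import statistics

    def two_sample_p(a, b):
        """Rough two-tailed z-test p-value (adequate for this illustration)."""
        se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        z = (statistics.mean(a) - statistics.mean(b)) / se
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    random.seed(1)
    comparisons = 200  # e.g., many survey items crossed with age bands
    false_positives = 0
    for _ in range(comparisons):
        younger = [random.gauss(0, 1) for _ in range(100)]
        older = [random.gauss(0, 1) for _ in range(100)]  # same underlying distribution
        if two_sample_p(younger, older) < 0.05:
            false_positives += 1

    # Expect roughly 5% of comparisons to look 'significant' despite pure noise.
    print(f"{false_positives} of {comparisons} comparisons cross the p < .05 bar")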

So, this result, one of the most striking, was that 64% of those 21-30 were motivated to learn to obtain certification, while only 22% of those over 50 were so motivated. That really seems like it might fit the generational-differences story, where the over-50s, the baby boomers, differ from the millennials. Here, the millennials are worried that the world is not a safe place, and want accreditation to help preserve their access (my rough story based upon millennial descriptions). And the baby boomers are more positive and trusting, so consequently feel less drive for certification. Or create your own explanation for the divergence based upon the differences between the generations.

Ok, what struck me is that there’s a totally different explanation: those in the 21-30 range are young and new. They want certifications to support their advancement, as they don’t have a lot of experience. Those who are older have real experience to point to, and have less need for external validation of their learning. What we’re seeing here is related not to generations, but to age. And that’s a very different explanation for the same phenomenon.

The core point is that if the generational explanation were true, this would stay true as these generations aged. The millennials, at age 50, would still care more about certifications. If it’s more a ‘stage of life’ thing, they’d care less as they aged, but the folks growing into that younger range would show the same difference.

The problem is that there are confounding explanations for the same data. So what else do we look at? Interestingly, in my research into what the data says, I’ve found several studies showing that when you ask folks what they value in the workplace, there is no significant difference by generation. That is, generations as defined by the societal circumstances of growing up don’t have an impact on workplace values.

Now, there have been a few exceptions, including the above (and I’ll reiterate, Laura wasn’t making a generational claim for this), but the question then becomes whether there are other explanations for the differences, such as age rather than generational context. Could other factors, such as natural age differences, create a perception of generational differences that isn’t truly persistent?

Ok, I’ll buy that WWII was a global event whose impacts were clear and measurable. Beyond that, sure, there were landmark popular-culture elements and zeitgeists, but I think most of the other proposed factors (I’ve heard claims of divorce, latchkey kids, etc. being generational factors) are nowhere near as clearly delineated in impact, and I doubt they’re sufficient to create the defining characteristics that are proposed.

My take home? Be suspicious of someone pushing a particular viewpoint without scrutiny of alternate hypotheses (including mine). There may be a better explanation than the one someone has a vested interest in pushing. Is there a real millennial difference? Certainly the so-called ‘digital native’ myth has been debunked (e.g. millennials are no better at search queries, or at evaluating the results, than anyone else), so maybe we want to be wary of other claims. I’m willing to be wrong on this, but the data seems to point to explanations other than defining generations. What say you?

Useful cognitive overhead

2 December 2015 by Clark 2 Comments

As I’ve reported before, I started mind mapping keynotes not to fill the blog, but to listen better. That is, without the extra requirement of processing the talk into a structure, my mind was (too) free to go wandering. I only posted the maps because I thought I should do something with them! And I’ve realized there’s another way I leverage cognitive overhead.

As background, I diagram.  It’s one of the methods I use to reflect.  A famous cognitive science article talked about how diagrams are representations that map conceptual relationships to spatial ones, to use the power of our visual system to facilitate comprehension. And that’s what I do, take something I’m trying to understand, some new thoughts I have, and get concrete about them.  If I can map them out, I feel like I’ve got my mind around them.

I use them to communicate, too. You’ve seen them here in my blog (or will if you browse around a bit), and in my presentations.  Naturally, they’re a large part of my workshops too, and even reports and papers.  As I believe models composed of concepts are powerful tools for understanding the world, I naturally want to convey them to support people in applying them themselves.

Now, what I realized (as I was diagramming) is that the way I diagram actually leverages cognitive overhead in a productive way. I use a diagramming tool (Omnigraffle if you must know, expensive but works well for me) to create them, and there’s some overhead in getting the diagram components sized, and located, and connected, and colored, and…  And in so doing, I’m allowing time for my thoughts to coalesce.

It doesn’t work with paper, because paper is hard to edit, and what comes out isn’t usually right at first: I move things around, break them up, rethink the elements. I can use a whiteboard, though usually to communicate a diagram already conceived; sometimes I can capture new thinking there, since it’s easy to edit a whiteboard. Flip charts, being harder to edit, are consequently more problematic.

So I was unconsciously leveraging the affordances of the tool to allow my thinking to ferment/percolate/incubate (pick your metaphor). Another similar approach is to seed a question you want to answer, or a thought you want to ponder, before some activity like driving, showering, or jogging. Our unconscious brain works powerfully in the background, given the right fodder. So hopefully this gives you some mental fodder too.

Templates and tools

1 December 2015 by Clark 2 Comments

A colleague whom I like and respect recently tweeted: “I can’t be the only L&D person who shudders when I hear the word ‘template’”, and I felt vulnerable because I’ve recently been talking about templates. To be fair, I have a different meaning in mind than most of what’s called a ‘template’, so I thought perhaps I should explain.

Let’s be clear: what’s typically referred to as a template is usually a simple screen type for a rapid authoring tool. That is, it lets you easily fill in the information and generate a particular type of interaction: drag-and-drop, multiple-choice, etc. This can be useful when you’ve got well-designed activities and want to develop them easily, but templates aren’t a substitute for good design, and they can make it easy to do bad design too. Worse are those skins that add gratuitous visual elements (e.g. a ‘racing’ theme) to a series of questions, in the deluded belief that such window dressing has any impact on anything.

So what am I talking about? I’m talking about templates that help reinforce the depth of learning science around the elements: templates for introductions that ask for the emotional opener, the drill-down from the larger context, and so on; templates for practices that are contextualized, meaningful to the learner, with differentiated response options and specific feedback; etc. This could be done in other ways, such as a checklist, but putting it into the place where you’re developing strikes me as a better driver ;). Particularly if it is embedded in the house ‘style’, so that the look and feel is tightly coupled to the learner experience.
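
To make that concrete, here’s a minimal sketch of the prompts such a template might carry. The field names and wording are my own illustration of the elements above, not an actual tool’s template:

    # A sketch of what a learning-science-driven template might prompt for;
    # the fields and wording are illustrative, not from any particular tool.
    intro_template = {
        "emotional_opener": "Why should the learner viscerally care about this?",
        "drill_down": "How does this connect to the larger context they already know?",
    }
    practice_template = {
        "context": "What realistic setting frames the task?",
        "meaningful_task": "Why does succeeding at this matter to the learner?",
        "response_options": "Differentiated choices reflecting likely misconceptions",
        "feedback": "Specific feedback tied to each response option",
    }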

Atul Gawande, in his brilliant The Checklist Manifesto, points out that there are gaps in our mental processing that mean we can skip steps and forget to coordinate. Whether the guidelines are in a template or in a process tool like a checklist, it helps to have cognitive facilitation. So what I’m talking about is not a template that dictates how the result should look, but what it should contain. There are ways to combine intrinsically motivating openings with initial practice, for instance.

Templates don’t have to stifle creativity, they can serve to improve quality instead.  As big a fan as I am of creativity, I also recognize that we can end up less than optimal if there isn’t some rigor  in our approach.  (Systematic creativity is  not an oxymoron!)  In fact, systematicity in the creative process can help optimize the outcomes. So however you want to scaffold quality and creativity, whether through templates or other tools, I do implore you to put in place support to ensure the best outcomes for you and  your audience.

Evidence for benefits: Towards Maturity Report

30 November 2015 by Clark 1 Comment

An organization that I cited in the Revolution book, Towards Maturity, has recently released their 2015-2016 Industry Benchmark Report, and it’s of interest to individuals and organizations looking for real data on what’s working, and not, in L&D.  Towards Maturity has been collecting benchmarking data on L&D practices for over a decade, and what they find bolsters the case to move L&D forwards.

The report has a number of useful sections, including documentation of the current state of the industry and guidance for business leaders on expectations, on listening to learners, and on rethinking the L&D team. Included are some top-level pointers for executives and L&D. And while the report is weighted towards Europe, respondents cover the globe, including Asia, the Americas, and more.

Overall, they’re finding that technology accounts for an average of 19% of L&D budgets (and this has been essentially flat for 3 years). This seems light; given that technology is a key enabler of performance and development, such a figure doesn’t seem appropriate. Of course, given that 55% of formal learning is still delivered face-to-face, it isn’t surprising.

A more interesting outcome comes from comparing what they call Top Deck organizations: those in the top 10% of their Towards Maturity Index. These organizations are characterized by four elements tied to success:

  • Learning aligned to need
  • Active learner voice
  • Design beyond the course
  • Proactive in connecting

Here we see key elements of the revolution. For one, learning isn’t just done on demand, but is coupled to organizational improvement. For another, the learner is engaged in the process of determining what solutions make sense. One that intrigues me is that the solutions go beyond courses, looking at performance support and more. And finally, L&D is reaching out across silos to engage in conversations. These are all key to achieving results 6 to 8 times those of the average organization.

The advice to business leaders also echoes the revolution. The call is to focus on performance, not on courses: it’s not about learning, it’s about outcomes. The recommendation is to break down silos so as to enable the conversations that will have meaningful impact.

The advice goes on: understand how learners are learning, create a participatory culture, and use  real business metrics.  All grounded in what successful organizations are doing.  The point here is not to recite all the outcomes, but instead to list highlights and encourage you to have a look at the report.  Going forward, you might even consider benchmarking your own organization!

Benchmarking is about best practices, and of course I encourage best principles; still, the frameworks they use are grounded in sound principles, and measuring yourself against the framework and improving is more important than comparing yourself to others. I’ll suggest that measuring yourself and evaluating your progress, in conjunction with a strategy, is a valuable investment of time.

What I really like, of course, is that the data support the position implied by the principles I derived from both practical experience and relevant conceptual models. The evidence is converging that there are positive steps L&D can, and should, take. The revolution provides the roadmap, and their data provides a way to evaluate progress. Here’s to improving L&D!

CERTainly room for improvement

24 November 2015 by Clark 3 Comments

As mentioned before, I’ve become a member of my local Community Emergency Response Team (CERT), since, in the case of a disaster, the official first responders (police, fire, and paramedics) will be overwhelmed. And it’s a good group, with a lot of excellent effort in processes and tools as well as drills. Still, of course, there’s room for improvement. I encountered one such case at our last meeting, and I think it’s an interesting case study.

So one of the things you’re supposed to do in conducting search and rescue is to go from building to building, assessing damage and looking for people to help. And one of the useful things to do is to mark the status of the search and the outcomes, so no one wastes effort on an already-explored building. While the marking is covered in training and there are support tools to help you remember, ideally it would be memorable, so that you can regenerate the information and don’t have to look it up.

The design for the marking is pretty clear: you first make a diagonal slash when you start investigating a building, and then you make a crossing slash when you’ve finished your assessment. Specific information is to be recorded in each quadrant of the resulting X: left, right, top, and bottom. (Note that the US standard set by FEMA doesn’t correspond to the international standard from the International Search & Rescue Advisory Group, interestingly.)

However, when we brought it up in a recent meeting (and they’re very good about revisiting things that quickly fade from memory), it was obvious that most people couldn’t recall what goes where. And when I heard what the standard was, I realized it didn’t have a memorable structure.  So, here are the four things to record:

  • the group who goes in
  • when the group completes
  • what hazards may exist
  • and how many people and what condition they’re in*

So how would you map these to the quadrants? In one sense it doesn’t matter, as long as there’s a sensible rationale behind the mapping. One sign that there’s not? You can’t remember what goes where.

Our  local team leader was able to recall that the order is: left – group, top – completion, right – hazards, and bottom – people.  However, this seems to me to be less than  memorable, so let me explain.
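
For concreteness, here’s that recalled layout written out as a simple mapping. This is a memory aid only, based on the recall above; verify it against the official FEMA CERT materials before relying on it:

    # The X-code layout as recalled above (memory aid only; check the official
    # FEMA CERT materials before relying on it in the field).
    x_code = {
        "left":   "group that went in",
        "top":    "when the search was completed",
        "right":  "hazards found",
        "bottom": "number of people found and their condition",
    }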

To me, wherever you put the ‘in’ (left or top), the ‘coming out’ ought to be opposite. And given our natural flow, the group going in makes sense on the left, and coming out ought to go on the right. In – out. Then it’s relatively arbitrary where hazards and people go. I’d make a case that top-of-mind should be the hazards found, to warn others, and that the people are the bottom line (see what I did there?). I could easily make a case for the reverse, but either would be a mnemonic to support remembering. Instead, as far as I can tell, it’s completely arbitrary. Now, if it’s not arbitrary and there is a rationale, it’d help to share that!

The point being: to help people remember things that are in some sense arbitrary, make a story that makes them memorable. Sure, I can look it up, assuming the lookup book they handed out stays in the pocket of my special backpack. (And I’m likely to remember now, because of all this additional processing, but that’s not what happens in the training.) However, making it regenerable from some structure gives you a much better chance of having it to hand. Either a model or a story is better than an arbitrary mapping, and one would be possible with a rewrite, but as it is, there’s neither.

So there’s a lesson in design to be had, I reckon, and I hope you’ll put it to use.

* (black or dead, red or needing immediate treatment for life-threatening issues, yellow or needing non-urgent treatment, and green or ok)

When (and not) to crowdsource?

23 November 2015 by Clark 1 Comment

Will Thalheimer commented on my ‘reconciliation’ post, and pointed out that there are times when you would be better off going to an expert. His apt observation is that there are times when it makes sense to crowdsource and times when it doesn’t, but it wasn’t clear to him or to me which was which. Naturally that led to some reflection, and this is where I ended up.

As a framework, I thought of Dave Snowden’s Cynefin model.  Here, we break situations into one of four types: simple or obvious, where there are known answers; complicated, where it requires known expertise to solve;  complex, where we’re dealing in new areas; and  chaotic, where things are unstable.

With this model, it’s clear that we’ll know what to do in the simple cases, and that we should bring in experts to deal with the complicated. For chaotic situations, the proposal is just to do something, to try to move the situation into one of the other three quadrants! It’s the remaining domain where we might want to consider social approaches.

The interesting place is the complex.  Here, I suggest, is where innovation is needed. This is the domain of trouble-shooting unexpected problems, coming up with new products or services, researching new opportunities, etc.  Here is where you determine experiments to try, and formulate plans to test.  While when the stakes are low you might do it individually, when the stakes are high you bring together a group.  It may be more than one expert, but here’s where you want to use good processes such as brainstorming (done right), etc.
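
To make the decision concrete, here’s a minimal sketch of how I’m framing it. It’s my own simplification; the labels and recommendations are not Snowden’s formal guidance:

    # My own simplification of the Cynefin-based decision above; the
    # recommendations are a sketch, not Snowden's formal guidance.
    def who_to_tap(domain: str) -> str:
        recommendations = {
            "obvious": "apply the known answer; no experts or crowds needed",
            "complicated": "bring in recognized expertise",
            "complex": "convene a diverse group: experiment, brainstorm (done right), reflect",
            "chaotic": "act to stabilize first, then reassess which domain you're in",
        }
        return recommendations.get(domain, "probe further to work out which domain you're in")

    print(who_to_tap("complex"))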

Here is where the elements of the learning organization come in.  Here is where you want to value diversity, be open to new ideas, make it safe to contribute, and provide time for reflection. Here is where you want to tap into collaboration and cooperation. Here is where you want to find ways to get people to work together effectively.

Will was insightful in pointing out that you don’t always want to tap into the wisdom of the crowd, not least for pragmatic reasons, so we want to be clear about when we do. My point is that we want to be able to do so when it makes sense, and to facilitate this as part of the new role for L&D in the revolution. So, as this is new thinking for me, let me tap into the power of the crowd here: does this make sense to you?

Facilitating Knowledge Work #wolweek

18 November 2015 by Clark 2 Comments

In the course of some work with a social business agency, I was wondering how to represent the notion of facilitating continual innovation. This representation emerged from my cogitations, and while it’s not quite right, I thought I’d share it as part of Work Out Loud week.

[5Rs diagram] The core is the 5 R’s: Researching the opportunities; processing your explorations by either Representing them or putting them into practice (Reify), and Reflecting on those; and then Releasing them. And of course it’s recursive: this post is a release of my representation of some ideas I’ve been researching, right? This is very much based on Harold Jarche’s Seek-Sense-Share model for Personal Knowledge Mastery (PKM). I’m trying to be concrete about the different types of activities you might do in the Sense stage, as I think representations such as diagrams are valuable but very different from active application via prototyping and testing. (And yes, I’m really stretching to keep the alliteration of the R’s. I may have to abandon that. ;)

What was interesting to me was to think of the ways in which we can facilitate each of those activities. We shouldn’t assume good research skills; we can assist individuals in understanding what qualifies as a good search for input, in evaluating the hits, and in establishing and filtering existing information streams.

We can and should also facilitate the representation of interpretations, whether by conveying the properties of good diagrams, prose, or other representational forms. We can help make the processes of representation clear as well. Similarly, we can develop understanding of useful experimentation approaches, and of how to evaluate the results.

Finally, we can communicate the outcomes of our reflections, and collaborate on all these activities, whether research, representation, reification (that R is a real stretch), or reflection. As I’m doing here by soliciting feedback.

I do believe there’s a role for L&D to look at these activities as well, and ‘training’ isn’t the solution. Here the role is very much facilitation.   It’s a different skill set, yet a fundamental contribution to the success of the organization. If you believe, like I do, that the increasing rate of change means innovation is the only sustainable differentiator for success, then this role is crucial and it’s one I think L&D has the opportunity to take on.  Ok, those are my thoughts, what are yours?

Reconciling two worlds

17 November 2015 by Clark 8 Comments

A recent post by my colleague in the Internet Time Alliance, Jane Hart, has created quite the stir. In it, she talks about two worlds: an old world and a new world of workplace learning. And another colleague from the Serious eLearning Manifesto, Will Thalheimer, wrote a rather ‘spirited’ response. I know, respect, and like both these folks, so I’m wrestling with trying to reconcile these seemingly opposite viewpoints. I tried to point out why I think the new perspective makes sense, but I want to go deeper.

Jane was talking about how there’s a split emerging between old-school L&D and new directions. This is essentially the premise of the Revolution, so I’m sympathetic. She characterized each, admittedly in somewhat stark contrast, representing the past with a straw-man portrait of an industrial-era approach, and the new world with a similarly stark portrait of a modern approach that is much more flexible and focused on outcomes, not on the learning event. I’ve experienced much of the former, and recognize the value of the latter. It’s of course not quite as cut-and-dried, but Jane was making the case for change and using a stark contrast as a motivator.

Will responded to Jane with some pretty strong language. He acknowledged her points in a section on areas of agreement, but then, after accusing her of painting with too broad a brush, he does the same in his section on Oversimplifications. There he points out extreme views that he implies are the ones being painted, but they’re overstated as ‘always’ and ‘never’.

Look, Will fights for the right things when he talks about how formal learning could be better. And Jane does too, when she looks to a more enlightened approach. So let’s state some more reasonable claims that I hope both can agree with. Here I’m taking Will’s ‘oversimplifications’ and infusing them with the viewpoints I believe in:

  1. Learners increasingly need to take responsibility for their learning,  and we should facilitate and develop it instead of leaving it to chance
  2. Learning can frequently be trimmed (and more frequently needs to change the content/practice ratio), and we should substitute performance support for learning when possible
  3. Much of  training and elearning is boring and we can and should do better making it meaningful
  4. People can be a great source of content, but they sometimes need facilitation
  5. Using some sort of enterprise social platform can be a powerful source for learning, with facilitation and the right culture, but isn’t necessarily a substitute when formal learning is required
  6. On-the-job learning isn’t necessarily easy to leverage, but should be a focus for better outcomes in many cases
  7. Crowds of people  have more wisdom than single individuals,  when you  facilitate the process appropriately
  8. Traditional learning professionals have  an opportunity to contribute to an information age approach, with an awareness of the bigger picture

I do like that Will, at the end, argues that we need to be less divisive, and I agree. I think Jane was trying to point in new directions, and I think the evidence is clear that L&D needs to change. Healthy debate helps; we need to have opinions, even strong ones, hopefully without rancor or aspersions. I don’t know quite why Jane’s post triggered such a backlash, but I hope we can come together to advance the field.

Learning and frameworks

13 November 2015 by Clark 4 Comments

There’s recently been a spate of attacks on 70:20:10 and moving beyond courses, and I have to admit I just don’t get it.  So I thought it’s time to set out why I think these approaches make sense.

Let’s start with what we know about how we learn. Learning is action and reflection. Instruction (education, training) is designed action and guided reflection. That’s why, by the way, an information dump followed by a knowledge test isn’t a learning solution. People need to actively apply the information.

And it can’t follow an ‘event’ model, as learning is spaced out over time. Our brains can only accommodate so much (read: very little) learning at any one time. There needs to be ongoing facilitation after a formal learning experience – coaching over time and stretch assignments – to help cement and accelerate the learning.

Now, this can be something L&D does formally, but at some point the formal has to let go (not least for pragmatic reasons), and it becomes the responsibility of the individual and the community. It shifts from formal coaching to informal mentoring, personal exploration, and feedback from colleagues and fellow practitioners. It’s impractical for L&D to take on the full responsibility; instead its role becomes facilitating mentoring, communication, and collaboration.

That’s where the 70:20:10 framework comes in.  Leaving that mentoring and collaboration to chance is a mistake, because it’s demonstrably the case that people don’t necessarily have good self-learning skills.  And if we foster self-learning skills, we can accelerate the learning outcomes for the organization. Addressing the skills and culture for learning, personally and collectively, is a valuable contribution that L&D should seize. And it’s not about controlling it all, but making an environment that’s conducive, and facilitating the component skills.

Further, some people  seem to get their knickers in a twist about the numbers, and I’m not sure why that is.  People seem comfortable with the Pareto Principle, for instance (aka the 80/20 rule), and it’s the same. In both cases it’s not the exact numbers that matter, but the concept. For the Pareto Rule it’s recognizing that some large fraction of outcomes  comes from a small fraction of  inputs.  For the 70:20:10 framework, it’s recognizing that much of what you apply as your expertise comes from things other than courses.  And tired old cliches about “wouldn’t want a doctor who didn’t have training” don’t reflect that you’d also not want a doctor who didn’t continue  learning through internships and practice.  It’s not denying the 10, it’s augmenting it.

And this is really what Modern Workplace Learning is about: looking beyond the course. The course is one important, but ultimately small, piece of being a practitioner, and organizations can no longer afford to ignore the rest of the learning picture. Of course, there’s also the whole innovation side, and performance support for when learning doesn’t have to happen at all, which L&D should also facilitate (cue the L&D Revolution), but getting the learning right by looking at the bigger picture of how we really learn is critical.

I welcome debate on this, but pragmatically, if you think about how you learned what you do, you should recognize that much of it came from things other than courses. Beyond Education, the other two E’s have been characterized as Exposure and Experience: doing the task in the company of others, learning socially, and learning from the outcomes of actually applying the knowledge in context and making mistakes. That’s real learning, and the recognition that it should not be left to chance is how these frameworks help raise awareness and provide an opportunity for L&D to become more relevant to the organization. And that, I strongly believe, is a valuable outcome. So, what do you think?

Levels of Design

11 November 2015 by Clark 3 Comments

In a recent conversation, we were talking about the Kirkpatrick model, and a colleague had an interesting perspective that hadn’t really struck me overtly. Kirkpatrick is widely (though not widely enough, and often wrongly) used as an evaluation tool, but he talked about using it as a design tool, and that perspective made clear for me a problem with our approaches.

So, there’s a lot of debate about the Kirkpatrick model, whether it helps or hinders the movement towards good learning. I think it’s misrepresented (including by its own progenitors, though they’re working on that ;), and while I’m open to new tools I think it does a nice job of framing a fairly simple but important idea. The goal is to start with the end in mind.

And the evidence is that it’s not being used well. The most widely implemented level is level 1, which isn’t of use (the correlation between learner reaction and actual impact is .09, essentially zero within rounding error). Level 2 use drops to about a third of organizations, and it falls off from there. And this is broken.

The point, and this is emphasized by the ‘design’ perspective, is that you are supposed to start with level 4 and work back. What’s the measurable indicator in the organization that isn’t up to snuff, and what behavior (level 3) would likely impact that? And how do we change that behavior (level 2)? Here’s where it can go beyond training: that intervention might be a job aid, or access to a network (which hasn’t featured much in the promotion of the model).

To be fair, the proponents do argue you should be starting at Level 4, but with the numbering (which Don admits he might have got wrong) and the emphasis on evaluation, it doesn’t hit you up front. Using it as a design tool, however, would emphasize the point.
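
As a thought experiment, here’s what that backward chain might look like written down. This is a sketch of my own; the example measures and interventions are hypothetical placeholders, not part of the model itself:

    # A sketch of working backwards from the organizational measure; the example
    # entries are hypothetical placeholders, not prescriptions from the model.
    design_chain = [
        ("Level 4 - business measure", "e.g., customer-support resolution times are too long"),
        ("Level 3 - workplace behavior", "e.g., reps consistently follow the triage protocol"),
        ("Level 2 - capability change", "e.g., practice with the protocol, a job aid, or network access"),
        ("Level 1 - learner reaction", "worth noting, but correlates ~.09 with actual impact"),
    ]

    # Design reads top-down (4 -> 1); traditional evaluation reads bottom-up.
    for level, detail in design_chain:
        print(f"{level}: {detail}")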

So here’s to thinking of learning design as working backwards from a problem, not forwards from a request. And, of course, to better learning design overall.
