Learnlets


Clark Quinn’s Learnings about Learning

David Eagleman #Trgconf Keynote Mindmap

25 February 2019 by Clark

David Eagleman gave a humorous and insightful keynote at the Training 19 conference. He helped us see how the unconscious relates to conscious behavior, and how to break out and tap into creativity. Here’s my mindmap:

Mindmap

Surprise, Transformation, & Learning

20 February 2019 by Clark

Recently, I came across an article about a new explanation for behavior, including intelligence. This ‘free energy principle’ claims that entities (including us) “try to minimize the difference between their model of the world and their sense and associated perception”. In other words, we try to avoid surprise. And we can either act to put the world back in alignment with our perceptions, or we have to learn, to create better predictions.

Now, this fits in very nicely with the goal I was talking about yesterday: generating surprise. Surprise does seem to be a key to learning! It sounds worth exploring.

The theory is quite deep. So deep, in fact, that people line up to ask questions of Karl Friston, the man behind it! Not just average people, but top scientists seek his help, because the theory promises to yield answers about AI, mental illness, and more. Yet, at core, the idea is simply that entities (all the way down, wrapped in Markov blankets, at the organ and cell level as well) look to minimize the differences between the world and their understanding of it. The difference that drives the choice of response (learning or acting) is ‘surprise’.
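To make the core idea concrete, here’s a toy sketch. This is my own illustration, not Friston’s formal free-energy mathematics, and all the names and numbers are invented: an agent keeps a running prediction of a signal, treats the squared prediction error as ‘surprise’, and ‘learns’ by nudging its model toward what it observes.

```python
# Toy illustration of 'minimizing surprise' (my sketch, not Friston's
# formal free-energy math). The agent predicts a signal, treats the
# squared prediction error as 'surprise', and learns by nudging its
# model toward what it actually observes.

def run_agent(observations, learning_rate=0.2):
    prediction = 0.0
    surprises = []
    for obs in observations:
        error = obs - prediction              # mismatch: world vs. model
        surprises.append(error ** 2)          # 'surprise' as squared error
        prediction += learning_rate * error   # learn: refine the prediction
    return prediction, surprises

final, surprises = run_agent([5.0] * 30)      # a stable world at value 5
# surprise shrinks as the model adapts to the world
```

In a stable world, surprise decays toward zero as the model adapts; a sudden change in the observations would spike it again, which is exactly the ‘learn or act’ trigger the theory describes.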

This correlates nicely with the point I was making about trying to trigger transformative perceptions to drive learning. It suggests that we should be looking to create these disturbances in complacency. The valence of these surprises may need to be balanced to the learning goal (transformative experience or transformative learning), but if we can generate an appropriate mismatch between expectation and outcome, we open the door to learning. People will want to refine their models, to adapt.

Going further, to also make it desirable to learn, the learner action that triggers the mismatch should likely be set in a task that learners viscerally get is important to them. The suggestion, then, is to create a situation where learners want to succeed, but their initial knowledge shows that they can’t. Then they’re ready to learn. And we (generally) know the rest.

It’s nice when an interest in AI coincides with an interest in learning. I’m excited about the potential of trying to build this systematically into design processes. I welcome your thoughts!

Getting brainstorming wrong

12 February 2019 by Clark

There are times when someone takes a result, doesn’t put it into context, and leads you to bad information. And we have to call it out. In this case, someone opined about a common misconception in regards to brainstorming, citing a scientific study to buttress an argument about how such a process should go. However, the approach cited in the study was narrower than what brainstorming could and should be. As a consequence, the article gave what I consider to be bad information. And that’s a problem.

Brainstorming

Brainstorming, to be fair, has many interpretations. The original approach brought people into a room, had them generate ideas, and then evaluate them. However, as I wrote elsewhere, we now have better models of brainstorming. The most important thing is to get everyone to consider the issue independently, before sharing. This taps into the benefits of diversity. First, though, you should have identified the criteria of the problem to be addressed or the outcome you’re looking for.

Then, you share, still refraining from evaluation, looking for ideas sparked from combinations of individual ideas, extending them (even illogically). The goal here is to ensure you explore the full space of possibilities. The point is to diverge.

Finally, you get critical and evaluate the ideas. Your goal is to  converge on one or several that you’re going to test. Here, you’re looking to surface the best option under the relevant criteria. You should be testing against the initial criteria.

Bad Advice

So, where did this other article go wrong? The premise was that the idea of ‘no bad ideas’ wasn’t valid. They cited a study where groups were given one of three instructions before addressing a problem: not to criticize, freedom to debate and criticize, or no instructions. The groups with instructions did better, and the criticize group did best. And that’s ok, because this wasn’t an optimal brainstorming design.

The debate-and-criticize group was actually tasked with doing most of the whole process at once: freewheeling debate and evaluation, diverging and converging. The no-criticism group was just diverging. But if you’re doing it all at once, you’re not getting the benefit of each stage! All the groups were missing the independent step, the no-criticism group didn’t have evaluation, and the combined freewheeling-and-criticizing group wouldn’t get the best of either.

This simplistic interpretation of the research misses the nuances of brainstorming, and ends up giving bad advice. Ok, if the folks doing brainstorming in orgs are already violating the premise of the stages, it is good advice, but why would you do suboptimal brainstorming? The full process might take a tiny bit longer, but that’s not a big issue, and the outputs are likely to be better.

Doing better

We can, and should, recognize the right context to begin with, and interpret research in that context. Taking an under-informed view can lead you to misinterpret research, and consequently to bad prescriptions. I’m sure this article gave this person the patina of knowing what they’re talking about. They’re citing research, after all! But if you unpack it, the veneer falls off and it’s unhelpful at the core. It’s important to be able to dig deep enough to really know what’s going on.

I implore you to turn a jaundiced eye to information that doesn’t come from someone with some real time in the trenches. We need good research translators.  I’ve a list of trustworthy sources on the resources page of my book on myths. Tread carefully in the world of self-promoting media, and you’ll be less hampered by the mud ;).

Learning from Experimentation

5 February 2019 by Clark

At the recent LearnTec conference, I was on a panel with my ITA colleagues, Jane Hart, Harold Jarche, and Charles Jennings. We were talking about how to lift the game of Modern Workplace Learning, and each had staked out a position, from human performance consulting to social/informal. Mine (of course :) was at the far end, innovation.  Jane talked about how you had to walk the walk: working out loud, personal learning, coaching, etc.  It triggered a thought for me about innovating, and that meant experimentation. And it also occurred to me that it led to learning as well, and drove you to find new content. Of course I diagrammed the relationship in a quick sketch. I’ve re-rendered it here to talk about how learning from experimentation is also a critical component of workplace learning.

Diagram: increasing experimentation, and even more learnings, based upon content

The starting point is experimentation. I put in ‘now’, because that’s of course when you start. Experimentation means deciding to try new things, but not just any things. They should be things that have a likelihood of improving outcomes if they work. The goal is ‘smart’ experiments: ones that are appropriate for the audience, build upon existing work, and are buttressed by principle. They may or may not be things that have worked elsewhere, but if so, they should have had good outcomes (or, less likely, didn’t but have an environmentally-sound reason to work for you).

Failure has to be ok. Some experiments should not work. In fact, a failure rate above zero is important, perhaps as much as 60%! If you can’t fail, you’re not really experimenting, and you don’t have psychological safety alongside accountability. You learn from failures as well as from successes, so it’s important to expect them. In fact, celebrate the lesson learned, regardless of success!

The reflections from this experimentation take some thought as well. You should have designed the experiments to answer a question, and the experimental design should have been appropriate (an A/B study, or comparing to baseline, or…). That way, the lesson from the experiment is quickly discerned. You also need to allow time to extract the lesson! The learnings here move the organization forward. Experimentation is the bedrock of a learning organization, if you consolidate the learnings. One of the key elements of Jane’s point, and others’, was that you need to develop this practice of experimentation within your team. Then, when it’s understood and underway, you can start expanding: first with willing (ideally, eager) partners, and then more broadly.

Not wanting to minimize, nor overly emphasize, the role of ‘content’, I put it in as well. The point is that in doing the experimentation, you’re likely to be driven to do some research. It could be papers, articles, blog posts, videos, podcasts, webinars, what have you, depending on your circumstances and interests. And… who knows, maybe even courses! It includes social interactions as well. The point is that it’s all part of the learning.

What’s not in the diagram, but is important, is sharing the learnings. First, of course, is sharing within the organization; you may have a community of practice or a mailing list that is appropriate. That builds the culture. After that, there’s beyond the org. If the learnings are proprietary, naturally you can’t. However, consider sharing an anonymized version at a local chapter meeting, and, if it’s significant enough or you get good enough feedback, go out to the field. Present at a conference, for instance!

Experimentation is critical to innovation. And innovation takes a learning organization. That includes a culture where mistakes are expected, there’s time for reflection, practices for experimentation are developed, and more. Yet the benefit, an agile organization, is essential. Experimentation needs to be part of your toolkit. So get to it!

 

The wisdom of instruction

29 January 2019 by Clark

I was listening in to a webinar on trends in higher education. The speakers had been looking at different higher ed pedagogy models, within and external to institutions. It became clear that there was a significant gap between a focus on meeting corporate needs and the original goals of education. Naturally, it got me to think, and one link was, not surprisingly, wisdom. So what does that mean?

In the ‘code academy’ models that are currently challenging higher education, there’s very much a ‘career’ focus. That is, they’re equipping students to be ready to take jobs. Which is understandable, but there’s a gap. A not-for-profit initiative I was involved with wanted to get folks a meaningful job. My point was that I didn’t want them to get a job, I wanted them to keep a job! And that means also developing learning-to-learn skills, and more. That ‘more’ is where we make a substantial shift.

The shift I want to think about is toward not just what corporations need, but what society needs. The original role of institutions like Oxford and Harvard was to create the next generation of leaders for society. That is, to give them the philosophical (in the broad sense) and historical perspective to let them do thinking like what delivered the US Constitution (as an example). There’s plenty of lip service to this, but little impact. For example, look at the success of teaching ethics separately from other business classes… let’s move on.

It seems like there are several things we need to integrate. As pointed out, treating them separately doesn’t work. So how do we integrate them and make them relevant? Let’s take Sternberg’s model of wisdom, where you think about decisions:

  • for the short term  and long term
  • for you, yours,  and society as a whole
  • and also explicitly discuss the value assumptions underpinning the decision.

This gives us a handle. We need to find ways to naturally embed these elements into our tasks. Our tasks need to require 21C skills and understanding the societal context as well.

In my ‘application-based instruction’ model, I talk about giving learners challenges that require 21C skills in natural ways. In this model, tasks mimic real-world tasks, asking for things like presentations, RFPs, problem recommendations, and more. Then, how do we also include the societal aspects? I suppose by putting those decisions in situations where there are implications not just for the business but for society.

Ok, it may be too much to layer this on every assignment (major assignment, not the accompanying knowledge check), but it should be covered in every subject (yes, even introductory) in some way. This thinking has already led me to create a question on evaluating policy tradeoffs for the mobile course I’m developing.

We need to keep the societal implications involved. Ensuring that at least a subset of the assignments do that is one approach. Doing so in a natural way requires some extra thinking, but the consequences are better. Particularly if the instructor actually makes a point of it (making a note to myself…).  A separate course doesn’t do it. So let’s get wise, and develop in deeper ways that will deliver better outcomes  in the domain, and for the greater good. Shall we?

Locus of learning: community, AI, or org?

15 January 2019 by Clark

A recent article caused me to think. Always a great thing!  It led to some reflections that I want to share. The article is about a (hypothetical) learning journey, and talks about how learning objects are part of that learning process. My issue is with the locus of the curation of those objects; should it be the organization, an AI, or the community?  I think it’s worth exploring.

The first sentence that stood out for me made a strong statement: “Choice is most productive when it is scaffolded by an organizationally-curated framework.” Curation of resources for quality and relevance is a good thing, but is the organization the best arbiter? I’ve argued that the community of practice should determine the curriculum for becoming a member of that community. Similarly, the resources to support progression in the community should come from the community, both within and outside the organization.

Relatedly, the sentence before this one states that “learner choice can be a dangerous thing if left unchecked”. And this really strikes me as the wrong model. It’s inherently saying we don’t trust our learners to be good at learning. I don’t expect learners (or SMEs, for that matter) to know learning. But then, we shouldn’t leave that to chance. We should be facilitating the development of learning-to-learn skills explicitly, having L&D model and guide it, and more. It’s rather an old-school approach to think that the org (through the agency of L&D) needs to control the learning.

A second line that caught my eye was that the protagonist “and his colleagues create and share additional AI-curated briefings with each other.” Is that AI curation, or community curation? And note that there’s ‘creation’, not just sharing. I’m thinking that the human agency is more critical than the AI curation. AI curation has gotten good, but when a community is working, the collective intelligence is better. Or, if we’re talking IA (and we should be), we should be explicitly looking to couple AI and community curation.

Another line is also curious.  “However, learning leaders must balance the popularity of informal learning with the formal, centralized needs of the organization. This can be achieved using AI-curated real-time briefings.” Count me skeptical. I believe that if you address the important issues – purpose via meaningful work and autonomy to pursue, communities of practice, and learning to learn skills – you can trust informal learning more than AI or a central view of what learning can and should be.

Most of the article was quite good, even if things like “psychological safety” are attributed to McKinsey instead of Amy Edmondson. I like folks looking to the future, and I understand that aligning with the status quo is a good business move. It’s just that when you get disconnects such as these, it’s an opportunity to reflect. And wondering about the locus of responsibility for learning is a valuable exercise. Can the locus be the individual and community, not the org or AI? Better yet, of course, if we get the synergy between them. But let’s think seriously about how to empower learners and community, ok?

 

A foolish inconsistency

8 January 2019 by Clark

Here, a foolish inconsistency is the hobgoblin of my little mind. While there are some learnings in here (for me and others), it’s really just getting stuff off my chest. Feel free to move along. This is just a lack of consistency that I suggest is unnecessary and ill-conceived.

I’ve hinted at this before, but I don’t think I’ve gone into detail. I like LinkedIn. It’s a useful augment for business networking. However, what drives me nuts is the inconsistency between the device app and the web interface.  One instance is sufficient: messaging someone you’ve just connected to.

So, on the device, if you link to someone, you immediately get a notice and a link to send them a message. And I like that, since I like to send a quick followup to everyone I link to (a trick I learned from a colleague). On the device, it goes straight to the messaging interface. Perfect. Now, there are invitations I want to query (e.g. when it’s not clear why they’ve linked) or decline with an explanation (I generally don’t link to orgs, for instance); I can’t do that from the app, but that’s ok, it can wait ’til I’m on my laptop using the (richer) web app.

On the web version, when I accept a link, I’m also offered the chance to message them, but here’s the trick: it’s not a message, it’s an InMail! And, of course, those are limited. I don’t want to use my InMails messaging someone I’m already linked to. (I don’t use them in general, but that’s a separate issue.) WHY can’t it go to messages like the app? That’d be consistent, and messages are the better default. I get that the app would have more limited functionality in return for being an app (there are benefits, like notifications), but why would the full web version do things that are contrary to your interests and intentions?!

Good design says consistency is a good thing, generally; certainly aligning with user expectations and best interests. It’s bad design to do something that’s unnecessarily wasteful. There are lots of such irritations: web forms that only tell you the expected format after you get it wrong, instead of making it easy to get right or giving you a clue; and sites with mismatched security (overly complex for unessential data, or vice-versa). Those are just two examples. This one, however, continues to be in my face regularly.

This inconsistency is instead a hobgoblin of a sensible mind. Has this irritated you, or what other silly  designs bedevil you?

 

The pain of learning

27 December 2018 by Clark

My dad, in his last years, lost the use of his hands and most of his hearing. It seemed like he then gave up. I finally challenged him on it, and he said “when you’re in constant pain…”.  And I got it.

So, it turns out I’ve a misbehaving disk in my back, and it started pressing on the nerve over the summer. Pain scales run 1-10; this ultimately got to an 8 when I was trying to walk or even stand (from my lower back down my leg to my toes). Tried physio, non-steroidal anti-inflammatories, and then a steroid pack; nope. The ‘big hammer’ option was a cortisone injection, and that happened. Better yet, it knocked it back, down to a 1. Er, for some six to eight weeks, then it came back. They gave me another one sooner than they were supposed to, but it hasn’t worked (ok, it’s knocked it to a 6 on average, but… this isn’t tolerable). And my point here isn’t that I’m looking for sympathy, but to (of course) talk about the learnings. Because, despite the physical pain, there are learnings (good and bad).

Because there’s a physiological basis (pressing on the nerve), I’ve stuck with treatments likely to minimize the inflammation. I haven’t looked at a chiropractor nor acupuncture. Given that the current approaches are failing, those may come up, though I’m expecting surgery as the nuclear option. Not that I’m eager (to the contrary!). One learning is how closed-minded I can be about exploring alternative solutions. On the other hand, as the pain shoots down the leg into my foot, I’ve learned a lot more about physiology!

In the course of navigating airports and the like while in the throes of this (long story), I also found that the milk of human kindness can be diluted by pain. When you’re muttering obscenities under your breath because of the knives that accompany every step, clueless actions on the part of others – like stopping suddenly, blocking access, or even just bad signage – can earn muffled imprecations and aspersions on parentage and intelligence. I’ve always tried to maintain ‘situational awareness’ (and know I’ve failed at times), but I highly recommend it!

On the other hand, when sitting (the only time it settles down), I’m expanding on my growing recognition over the past years that I have no idea what anyone else may be going through.  I’m sure my limping through parking lots and stores can be perceived as congenital damage or wear and tear. There’s no real way for anyone to know how much someone else hurts. We don’t have meters over our heads or icons.

And I’m increasingly grateful!  That may sound odd, but this experience is teaching me (and I am trying to find the positive).  Finding ways to minimize it is an ongoing experimentation. The support of my family helps, and I’ve learned (some) to ask for help.  But even an involuntary and undesirably challenging experience still is an experience.

Also, as much as it may be hard to struggle to find time and motivation for exercise, you learn to miss it. It seems every time I start taking a serious stab at diet and exercise, something goes wrong!  It’s almost like I’m not supposed to; and I know that’s wrong.  (I’ve also learned to secretly suspect my pain doctor is a closet sadist, but that’s the pain talking. :)

This is definitely  not ‘hard fun‘, to be clear. This is much more lemonade.  Fingers crossed that this, too, will pass. And if you do see me limping around, cut me some slack ;).  But also, please understand that it’s hard to know what other people are going through, and do your best to be sympathetic. Which seems like the right message for this time of year anyway. Wishing you and yours all the best for the holidays and the new year!

The case for PKM

20 December 2018 by Clark

Seek > Sense > Share

Apparently, an acquaintance challenged my colleague Harold Jarche’s Personal Knowledge Mastery (PKM) model, seeming to consider the possibility that it’s a fad. Well, I do argue people should be cautious about claims. I’ve talked about PKM before, but I want to elaborate. Here’s my take on the case for PKM.

As context, I think meta-learning, learning to learn, is an important suite of skills to master. As things change faster, with more uncertainty and ambiguity, the ability to continually learn will be a critical differentiator. And you can’t take these skills for granted; they’re not necessarily optimal, and our education systems generally aren’t doing a good job of developing them. (Most school practices are antithetical to self-learning!)

Information is key. To learn, you need access to it, and the chance to apply it. Learning on one’s own is about recognizing a knowledge gap, looking for relevant information, applying what you find to see if it works, and, once it does, consolidating the learning.

Looking at how you deal with information – how you acquire it, how you process it, and how you share your learnings – is an opportunity to reflect. Think of it as double-loop learning: applying your learning to your own learning. We’re often not so meta-reflective, yet that ends up being a critical component of improving.

Having a framework to scaffold this reflection is a great support for improving. Then the question becomes what is the right or best support?  There are lots of people who talk about bits and pieces, but what Harold’s done is synthesize them into a coherent whole (not a ‘mashup’). PKM integrates different frameworks, and creates a practical approach.  It is simple, yet unpacks elegantly.

So what’s the evidence that it’s good? That’s hard to test. The acquaintance was right that university uptake alone isn’t a solid basis (I recently found a renowned MBA program that was still touting MBTI!). The hard part would be to create a systematic test. Ideally, you’d find an organization that implements it, and document the increase in learning. However, learning in that sense is hard to measure, because it’s personal. You might look for an increase in aggregate measures (more ideas, faster troubleshooting), but this is personal and is dependent on outside factors like the culture for learning.

When you don’t have such data, you have to look for triangulating evidence. The fact that multiple university scholars are promoting it isn’t a bad thing. To the contrary, uptake at individual institutions without a corporate marketing program is actually quite the accolade! The fact that workshop attendees tout it as personally valuable is also a benefit. While we know that individual attendees’ reports on the outcomes of a workshop don’t highly correlate with actual impact, that’s less true for people with more expertise. And the continued reflection of value is positive.

Finally, a point I made at the end of my aforementioned previous reflection is relevant. I said: “I realize mine is done on sort of a first-principles basis from a cognitive perspective, while his is richer, being grounded in others‘ frameworks.”  Plus, he’s been improving it over the years, practicing what he preaches. My point, however, is that it’s nicely aligned with what you’d come at from a cognitive perspective. Without empirical data, theoretical justification combined with scholarly recognition and personal affirmations are a pretty good foundation.

There’re meta-lessons here as well: how to evaluate programs, and the value of meta-learning. These are worth considering. Note that Harold doesn’t need my support, and he didn’t ask me to do this. As usual, my posts are triggered by what crosses my (admittedly febrile) imagination. This just seemed worth reflecting on. So, reflections on your part?

Experimentation specifics

5 December 2018 by Clark

I’m obviously a fan of innovation, and experimentation is a big component of innovation. However, I fear I haven’t really talked about the specifics.  The details matter, because there are smart, and silly, ways to experiment. I thought I’d take a stab at laying out the specifics of experimentation.

First, you have to know what question you’re trying to answer. Should we use a comic or a video for this example?  Should we use the content management system or our portal tool to host our learning and performance support resources?  What’s the best mechanism for spacing out learning?

An important accompanying question is “how will we know what the answer is?” What data will discriminate? You need a way to tell which outcome you have: we know, we can’t know, or we need to revise and run it again.

Another way to think about this is: “what will we do differently if we find this?” and “what will we do differently if it turns out differently?” The point is to know not just what you’ll know, but  what it means.

You want to avoid random experimentation. There are ‘let’s try it out’ pilots that are exploratory, but you still want to know what question you’re answering. Is it “what does it take to do VR?” or “let’s try using our social media platform to ‘show our work’”?

Then you need to design the experiment. What’s the scope? How will you run it? How will you collect data? Who are your subjects?  How will you control for problems?
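As a sketch of what ‘data that will discriminate’ might look like, here’s a hedged example with wholly invented numbers: comparing assessment scores from two versions of a lesson (say, video vs. comic examples) with a simple permutation test, which asks how often a difference this large would arise by chance.

```python
import random
import statistics

# Hedged sketch with invented scores: did variant B (comic) outscore
# variant A (video)? A permutation test estimates how often a difference
# this large would arise if the group labels didn't matter.

def permutation_test(a, b, trials=10_000, seed=42):
    observed = statistics.mean(b) - statistics.mean(a)
    pooled = a + b
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)                   # random relabeling of scores
        diff = statistics.mean(pooled[len(a):]) - statistics.mean(pooled[:len(a)])
        if diff >= observed:
            hits += 1
    return observed, hits / trials            # effect size, one-sided p-value

scores_a = [62, 70, 65, 68, 64, 66]           # hypothetical assessment scores
scores_b = [74, 71, 77, 69, 75, 72]
effect, p = permutation_test(scores_a, scores_b)
# a small p suggests the difference is unlikely to be chance alone
```

The point isn’t the statistics per se; it’s that deciding up front what comparison you’ll run, and what result would change your course of action, is part of designing the experiment.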

One of the claims has regularly been “don’t collect any data you don’t know what you’ll do with”.  These days, you can run exploratory data analysis, but still, accumulating unused data may be a mistake.

The after-experiment steps are also important. Major questions include: “what did we learn”, “do we trust the results”, and “what will we do as a result”. Then you can followup with the actions you determined up front that would be predicated on the outcomes you discover.

Experimentation is a necessary component of growth. You have to have a mindset that you learn from the experiment, regardless of outcome. You should have a budget for experimentation and expect a degree of failure. It’s ok to lose, if you don’t lose the lesson! And share your learnings so others don’t have to run the same experiment. So experiment, just like I did here; is this helpful? If not, what would it need to be useful?
