Learnlets

Clark Quinn’s Learnings about Learning

Violating Expectations

4 April 2019 by Clark Leave a Comment

As some of you may know, last week I had a surgical procedure. I don’t want to share details, but while it was non-trivial, it went fine. (I’ve talked earlier about the situation beforehand.) What did not go fine was the recovery. I’m good now, but there were a harrowing few days. And I think the reason is of interest, and there’s a lesson. So I thought I’d share.

Now, my only previous experience with surgery was outpatient knee surgery. And it was amazing; I went off pain killers the 2nd day, and recovery was rapid. This, too, was outpatient, and while not as ‘micro’ as the knee surgery, I had no other frame of reference. And that caused a problem.

So, the day of the surgery went about as you might expect. I went in, lay down, woke up somewhere else, and was told things went well. With the benefit of meds, I let folks know I’d lived :), and proceeded to sleep away the afternoon. Come the evening, I was more clear-headed. With good meds, I looked forward to a night’s sleep, and better in the morn.

That night’s sleep was not good. I couldn’t get comfortable, and so couldn’t sleep. Specifically, my left (not bad) side was uncomfortable, and so was my right side that was supposedly fixed. I was awake all but maybe two hours. And I hadn’t yet gotten used to sleep deprivation.

I was bothered that the side that had had the surgery, that was now supposed to be free of the cause, still hurt. Differently, perhaps, but still hurt. This was dismaying (to put it mildly). I called Saturday night, and was told that the right side could still hurt for some days. Er, okay…

Saturday night wasn’t really better. I slept maybe 3-4 hours, but the lack of comfort meant I was still worried, and now sleep-deprived as well. That wasn’t all, but worth recounting is that by Sunday my whole right leg wasn’t working. Any sort of movement other than standing hurt. Not good.

This continued into Monday. Little and bad sleep meant I was going into a mentally challenging state of sleep-deprivation. The lack of right leg action began to make me feel like the whole experiment had failed, and I was going to have to face this again. I put in a call, but my doctor was in surgery. You can imagine I was discouraged and distressed. I was headed to bed when my doctor finally called. They’d scheduled me to see him the next day, and I hung on to that.

When I got to see him on Tuesday morning, I was a wreck. Spaced out from lack of sleep, distressed about my leg, and so on. I first talked to the PA, and then the doc came in. And, I found out a lot more. They’d completely removed the material pressing on the disk, but in so doing they’d likely irritated the nerve. And doing that could cause some bleeding, too. So, my leg hurting was explained. When we talked about meds, they were reminded what I had been on, and how the sudden cessation of that could be problematic. With explanations, and revised recommendations for medications going forward, it seemed promising.

Lo and behold, after the visit, things began to fall into place. The medication revisions kicked in, and I felt a lot better. Not good, mind you, but many times better. Finally, I could see how this was all working, and I was progressing!

I awoke this morning and verified that yesterday wasn’t a fluke; I’m on a path to recovery. I still have a backlog of things to deal with, but I can get on that now.  And I still hurt in various places. But it’s the right hurt, I now know.

The short version of all this is that expectations matter. Stephanie Burns did her Ph.D. research on the people who succeeded with their goals (vs. those who didn’t), and found that the successful ones managed their expectations appropriately: they set goals, rewarded them, realized it was a long haul, etc. Similarly for learning; you want expectations to match experience. A mismatch can create barriers to successful learning. If the experience will be typical, it may not matter so much, but you want to be wary of any ways in which people can find their expectations mismatched. Yes, you want some surprise, but you don’t want people to lose their comprehension of who they are and where they’re going.

I don’t actually blame the doctor. I think they could’ve set my expectations better, but I fear I come across as someone who has an idea of what’s going on. And I should’ve asked more questions. Further, I think there weren’t any flags that I needed such support. Still, it perhaps ought to be automatic. So consider setting expectations. Deliberately. Systematically. You can let people know there might be some surprise, without giving it away. Don’t leave people open to forming inappropriate expectations, or you might be unpleasantly surprised.

Sarah Prevette #LSCon Keynote Mindmap

28 March 2019 by Clark Leave a Comment

Sarah Prevette closed the Learning Solutions conference with a rapid-fire overview of Design Thinking and a passionate case that entrepreneurship comprises the success skills of the future. Starting with her experiences, she laid out success factors, and suggested that these skills were learnable and should be the curriculum.

Curriculum or pedagogy?

12 March 2019 by Clark Leave a Comment

In a conversation today, I mentioned that previously I’ve thought that perhaps the best next ‘man on the moon’ project would be to put an entire K12 curriculum up online. And, I’ve also thought that the only way to really fix things is to train trainers of teachers to learn to facilitate learning around meaningful activity. And, of course, both are needed. What am I thinking?

So, there are huge gaps in the ways in which folks have access to learning. For example, I worked on a project that was trying to develop some K12 curricula online, to provide support for HS learners who might not have access to sufficiently capable teaching. The project had started with advanced learners, but recognized that wasn’t the only gap. And this is in California! So I have argued for a massive project, but using advanced curricula and pedagogy.

And, at the other end, I spoke at a conference on improving education in India. There, they have a much bigger need for good teachers than they can meet with their education schools. I was arguing for viral teacher prep: the idea being not just to train teachers, but to train the trainers of those teachers. Then the training could go viral, as just teaching teachers wouldn’t go fast enough.

And both are right, and not enough. In the conversation, I resurrected both points and am now reflecting on how they interact. The simple fact is that we need a better curriculum and a better pedagogy. As Roger Schank rightly points out, things like the quadratic equation are nuts to keep in a K12 curriculum. The fact is that our curriculum came from before the Industrial Age and is barely adequate even there. Yet we’re in an Information Age. And our pedagogy is aligned to tests, not to learning or doing. We should be equipping kids with actionable knowledge to make meaningful decisions in their lives, not with arbitrary and abstract knowledge that isn’t likely to transfer.

And, of course, even if we did have such a curriculum online, we’d need teachers who could facilitate learning in this way. And that’s a barrier not just in India. The point being that most of the world is suffering with bad curricula and pedagogy. How do we make this change?

And I don’t have an answer. I think we should put both online, and support on the ground. We need that content, available through mobile to reach beyond the developed world, and we need the facilitators. They can be online, as I think about it, but they need to understand the context on the ground if they’re not there. They are context-specific necessities. And this is a massive problem.

Principle says: start small and scale. There are institutions doing at least parts of this, but scaling is a barrier. And again, I have no immediate solution other than a national (or international) initiative. We don’t want just one without the other. I don’t want teachers facilitating the old failed curricula, and I don’t want current pedagogies working on the new curricula. (And I shudder at the thought of a pre-college test in the old style trying to assess this new model!) I welcome your thoughts!

Thoughts on strategy from Training 19

6 March 2019 by Clark Leave a Comment

So last week I was the strategy track coach for the Training 19 conference. An experiment! That meant that I picked the sessions from a list of those who put their session proposals up for ‘strategy’, and could choose to open and/or close the track. I chose both. And there were thoughts on strategy from the sessions and the attendees that are worth sharing.

I chose the sessions mainly on two criteria: coverage of the topics, and sessions that sounded like they’d give real value. I was lucky: the latter happened! While I didn’t get the complete coverage I wanted, I did get a good spread of topics. So I think the track worked. As to the coaching, there wasn’t much of that, but I’ve sent in suggestions for whoever does it next year.

I knew two of the presenters, and some were new. My goal, again, was real coverage. And they lived up to it. Friend and colleague Michael Allen practiced what he preached while talking about good learning design, as he does. He was followed by Karen Polhemus & Stephanie Gosteli, who told a compelling tale of how they were managing a huge initiative by combining training with change management. Next, JD Dillon, another friend, talked about his experiences building learning ecosystems that deemphasized courses, based upon data and his inferences. Alwyn Klein made an enthusiastic and compelling case for doing performance consulting before you start. Haley Harris & Beth Wisch went deep on data in talking about how they met content needs by curating. Joe Totherow talked about games as a powerful learning tool. Finally, Alex Kinnebrew pushed for finding stakeholder voices as a complement to data in making strategy.

I bookended these talks. I opened by making the case for doing optimal execution right, meaning doing proper learning design and performance support. Then I talked about driving continual innovation with social and informal learning. I closed by laying out the performance ecosystem diagram (ok, so I replaced ‘elearning’ in the diagram with ‘training’, and that’s probably something I’ll keep), and placed the coming talks on it, so that attendees would know where the talks fit. I mostly got it right ;). However, some feedback complained that I took too long to get to the overview. Useful feedback.

I finished with a 3 hour strategy session where I walked people through each element of the ecosystem (as I cut it), giving them examples, providing self-assessment, and items to add to their strategy for that element. I closed by suggesting that it was up to them to sequence, based upon their particular context. Apparently, people  really liked this opportunity. One challenge was the short amount of time; this is usually run as a full day workshop.

It’s clear that folks are moving to thinking ‘outside of the box’, and I’m thrilled. There were good audiences for the talks in a conference focused on doing training! It’s definitely time for thoughts on strategy. Perhaps, as has happened before, I was ahead of the time for the revolution. Here’s to a growing trend!

Surprise, Transformation, & Learning

20 February 2019 by Clark 1 Comment

Recently, I came across an article about a new explanation for behavior, including intelligence. This ‘free energy principle’ claims that entities (including us) “try to minimize the difference between their model of the world and their sense and associated perception”. To put it in other words, we try to avoid surprise. And we can either act to put the world back in alignment with our perceptions, or we have to learn, to create better predictions.

Now, this fits in very nicely with the goal I’d been trying to talk about yesterday: generating surprise. Surprise does seem to be a key to learning! It sounds worth exploring.

The theory is quite deep. So deep, people line up to ask questions of the guy behind it, Karl Friston! Not just average people, but top scientists need his help. Because this theory promises to yield answers to AI, mental illness, and more! Yet, at core, the idea is simply that entities (all the way down, wrapped in Markov blankets, at organ and cell level as well) look to minimize the differences between the world and their understanding. The difference that drives the choice of response (learning or acting) is ‘surprise’.
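As a toy illustration of the act-or-learn choice (my own sketch, not Friston’s actual mathematics; the linear update rule and the rate are invented for clarity):

```python
# Toy sketch of surprise minimization: an agent reduces the gap between
# its prediction and its observation either by acting on the world or by
# learning (updating its model). Illustrative only, not the real formalism.

def minimize_surprise(prediction, observation, can_act, rate=0.5):
    """One step: reduce |observation - prediction| by acting or learning."""
    surprise = abs(observation - prediction)   # prediction error
    if surprise == 0:
        return prediction, observation         # model already matches world
    if can_act:
        # Act: nudge the world toward the prediction.
        observation += rate * (prediction - observation)
    else:
        # Learn: nudge the model toward the world.
        prediction += rate * (observation - prediction)
    return prediction, observation

# A learner who can't change the world must update the model instead:
p, o = 0.0, 1.0
for _ in range(10):
    p, o = minimize_surprise(p, o, can_act=False)
print(round(p, 3))  # the prediction has converged toward the observation
```

The point of the sketch is just the branch: when the world won’t budge, the only way to make surprise go away is to learn.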

This correlates nicely with the point I was making about trying to trigger transformative perceptions to drive learning. This suggests that we should be looking to create these disturbances in complacency. The valence of these surprises may need to be balanced to the learning goal (transformative experience or transformative learning), but if we can generate an appropriate mismatch between expectation and outcome, we open the door to learning. People will want to refine their models, to adapt.

Going further, to also make it desirable to learn, the learner action that triggers the mismatch likely should be set in a task that learners viscerally get is important to them. The suggestion, then, is to create a situation where learners want to succeed, but their initial knowledge shows that they can’t. Then they’re ready to learn. And we (generally) know the rest.

It’s nice when an interest in AI coincides with an interest in learning. I’m excited about the potential of trying to build this systematically into design processes. I welcome your thoughts!

Transformative Learning & Transformative Experiences

19 February 2019 by Clark 2 Comments

In my quest to not just talk about transformation but find a way to go beyond just experience, I did some research. I came across a mention of transformative experiences. And that, in turn, led me to transformative learning. And the distinction between them started me down a path that’s still evolving. Practicing what I preach, here’s how my thinking’s developing.

I’ll start with the reverse, transformative learning, because it came first and it’s at the large end. Mezirow was the originator of Transformative Learning Theory. It’s addressing big learnings, those that come about from a “disorienting dilemma”. These are life-changing events. And we do want to be able to accommodate this as well, but we might also need something more, er, scalable. (Do we really want to ruin someone’s life for the purpose of our learning goals? :) So, what’s at core? It’s about a radical reorientation. It’s about being triggered to change your worldview. Is there something that we can adapt?

The author of the paper pointed me to her co-author, who unveiled a suite of work around Transformative Experience Theory. These are smaller experiences.  In one article, they cite the difference between transformative learning and transformative experiences, characterizing the latter as “smaller shifts in perspective tied to the learning of particular content ideas”.  That is, scaling transformative learning down to practical use, in their case for schools. This sounds like it’s more likely to have traction for day to day work.

The core of transformative experience, however, is more oriented towards the classroom and not the workplace. To quote: “Transformative experiences occur when students take ideas outside the classroom and use them to see and experience the world in exciting new ways.”  All well and good, and we  do want our learners to perceive the workplace in new ways, but it’s not just presenting ideas and facilitating the slow acquisition. We need to find a handle to do this reliably and quickly.

My initial thought is about ‘surprise’. Can we do less than trigger a life-changing event, but provide some mismatch between what learners expect and what occurs, to open their eyes? Can we do that systematically, reliably, and repeatedly? That’s where my thinking’s going: about ensuring there’s a mismatch, because that’s the teachable moment.

Can we do small scale violations of expectations that will trigger a recognition of the need for (and willingness to accomplish) learning?  My intuition says we can. What say you?  Stay tuned!

Getting brainstorming wrong

12 February 2019 by Clark Leave a Comment

There’s a time when someone takes a result, doesn’t put it into context, and leads you to bad information. And we have to call it out. In this case, someone opined about a common misconception in regards to brainstorming. This person cited a scientific study to buttress an argument about how such a process should go. However, the approach cited in the study was narrower than what brainstorming could and should be. As a consequence, the article gave what I consider to be bad information. And that’s a problem.

Brainstorming

Brainstorming, to be fair, has many interpretations. The original brought people into a room, had them generate ideas, and had them evaluate those ideas. However, as I wrote elsewhere, we now have better models of brainstorming. The most important thing is to get everyone to consider the issue independently, before sharing. This taps into the benefits of diversity. You should also have identified the criteria of the problem to be addressed, or the outcome you’re looking for.

Then, you share, still refraining from evaluation, looking for ideas sparked from the combinations of individual ideas, extending them (even illogically). The goal here is to ensure you explore the full space of possibilities. The point is to diverge.

Finally, you get critical and evaluate the ideas. Your goal is to converge on one or several that you’re going to test. Here, you’re looking to surface the best option under the relevant criteria. You should be testing against the initial criteria.
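The staging can be sketched in code (a toy model of my own, not a validated tool; the scoring function is a stand-in for whatever criteria you identified up front):

```python
# Toy sketch of staged brainstorming: independent ideation, divergent
# combination without evaluation, then convergent evaluation against
# the initial criteria. Names and the scoring rule are illustrative.
from itertools import combinations

def brainstorm(individual_ideas, score, keep=3):
    """individual_ideas: one idea list per person (the independent step).
    score: the criteria function, applied only in the converge stage."""
    # Independent step: each person ideates alone; here we just pool results.
    base = [idea for ideas in individual_ideas for idea in ideas]
    # Diverge: spark combinations of ideas -- no evaluation yet, so even
    # implausible pairings stay in the pool.
    pool = base + [f"{a} + {b}" for a, b in combinations(base, 2)]
    # Converge: only now apply the criteria and keep the top candidates.
    return sorted(dict.fromkeys(pool), key=score, reverse=True)[:keep]

# Toy usage: 'score' here just prefers longer (i.e., combined) ideas.
picks = brainstorm([["chatbot", "job aid"], ["checklist"]], score=len)
print(picks)
```

The structure matters more than the code: evaluation (`score`) appears only in the last line, never in the pooling or combining stages.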

Bad Advice

So, where did this other article go wrong? The premise was that the idea of ‘no bad ideas’ wasn’t valid. They cited a study where groups were given one of three instructions before addressing a problem: not to criticize, free to debate and criticize, or no instructions. The groups with instructions did better, but the criticize group was best. And that’s ok, because this wasn’t an optimal brainstorming design.

What the debate-and-criticize group was actually tasked with was doing most of the whole process at once: freewheeling debate and evaluation, diverging and converging. The no-criticism group was just diverging. But, if you’re doing it all at once, you’re not getting the benefit of each stage! All the groups were missing the independent step, the freewheeling group didn’t have evaluation, and the combined freewheeling-and-criticizing group wouldn’t get the best of either.

This simplistic interpretation of the research misses the nuances of brainstorming, and ends up giving bad advice. Ok, if the folks doing brainstorming in orgs are already violating the premise of the stages, it is good advice, but why would you do suboptimal brainstorming? Proper staging might take a tiny bit longer, but that’s not a big issue, and the outputs are likely to be better.

Doing better

We can, and should, recognize the right context to begin with, and interpret research in that context. Taking an under-informed view can lead you to misinterpret research, and consequently lead you to bad prescriptions. I’m sure this article gave this person the patina of knowing what they’re talking about. They’re citing research, after all! But if you unpack it, the veneer falls off and it’s unhelpful at the core. And it’s important to be able to dig deep enough to really know what’s going on.

I implore you to turn a jaundiced eye to information that doesn’t come from someone with some real time in the trenches. We need good research translators.  I’ve a list of trustworthy sources on the resources page of my book on myths. Tread carefully in the world of self-promoting media, and you’ll be less hampered by the mud ;).

The wisdom of instruction

29 January 2019 by Clark Leave a Comment

I was listening in to a webinar on trends in higher education. The speakers had been looking at different higher ed pedagogy models, within and external to institutions. It became clear that there was a significant gap between a focus on meeting corporate needs and the original goals of education. Naturally, it got me to think, and one link was, not surprisingly, wisdom. So what does that mean?

In the ‘code academy’ models that are currently challenging higher education, there’s very much a ‘career’ focus. That is, they’re equipping students to be ready to take jobs. Which is understandable, but there’s a gap. A not-for-profit initiative I was involved with wanted to get folks a meaningful job. My point was that I didn’t want them to get a job, I wanted them to keep a job! And that means also developing learning-to-learn skills, and more. That ‘more’ is where we make a substantial shift.

The shift I want to think about is not just what corporations need, but what society needs. The original role of institutions like Oxford and Harvard was to create the next generation of leaders of society. That is, to give them the philosophical (in the broad sense) and historical perspective to let them do thinking like what delivered the US Constitution (as an example). And there’s plenty of lip service to this, but little impact. For example, look at the success of teaching ethics separately from other business classes…let’s move on.

It seems like there are several things we need to integrate. As pointed out, treating them separately doesn’t work. So how do we integrate them and make them relevant? Let’s take Sternberg’s model of Wisdom, where you think about decisions:

  • for the short term  and long term
  • for you, yours,  and society as a whole
  • and also explicitly discuss the value assumptions underpinning the decision.

This gives us a handle. We need to find ways to naturally embed these elements into our tasks. Our tasks need to require 21C skills and understanding the societal context as well.

In my ‘application-based instruction’ model, I talk about giving learners challenges that do require 21C skills in natural ways. In this model, tasks mimic real-world tasks, asking for things like presentations, RFPs, problem recommendations, and more. Then, how do we also include the societal aspects? I suppose by putting those decisions in situations where there are implications not just for the business but for society.

Ok, it may be too much to layer this on every assignment (major assignment, not the accompanying knowledge check), but it should be covered in every subject (yes, even introductory) in some way. This thinking has already led me to create a question on evaluating policy tradeoffs for the mobile course I’m developing.

We need to keep the societal implications involved. Ensuring that at least a subset of the assignments do that is one approach. Doing so in a natural way requires some extra thinking, but the consequences are better. Particularly if the instructor actually makes a point of it (making a note to myself…).  A separate course doesn’t do it. So let’s get wise, and develop in deeper ways that will deliver better outcomes  in the domain, and for the greater good. Shall we?

What to evaluate?

22 January 2019 by Clark 4 Comments

In a couple of articles, the notion that we should be measuring our impact on the business is called out. And being one who says just that, I feel obligated to respond.  So let’s get clear on what I’m saying and why.  It’s about what to evaluate, why, and possibly when.

So, in the original article, by my colleague Will Thalheimer, he calls the claim that we should focus on business impact ‘dangerous’!  To be fair (I know Will, and we had a comment exchange), he’s saying that there are important metrics we should be paying attention to about what we do and how we do it. And no argument!  Of course we have to be professional in what we do.  The claim isn’t that the business measure is  all we need to pay attention to. And he acknowledges that later. Further, he does say we need to avoid what he calls ‘vanity metrics’, just how efficient we are. And I think we  do need to look at efficiency, but only after we know we’re doing something worthwhile.

The second article is a bit more off-kilter. It seems to ignore the value of business metrics altogether. It talks about competencies and audience, but not impacting the business. Again, the author raises the importance of being professional, but still seems to be in the ‘if we do good design, it is good’ camp, without seeming to even check whether the design is addressing something real.

Why does this matter?  Partly because, empirically, what the profession measures are what Will called ‘vanity’ measures. I put it another way: they’re efficiency metrics. How much per seat per hour? How many people are served per L&D employee?  And what do we compare these to?  Industry benchmarks. And I’m not saying these aren’t important, ultimately. Yes, we should be frugal with our resources. We even should ultimately ensure that the cost to improve isn’t more than the problem costs!  But…

The big problem is that we’ve no idea if that butt in that seat for that hour is doing any good for the org.  We don’t know if the competency is a gap that means the org isn’t succeeding!  I’m saying we need to focus on the business imperatives because we  aren’t!

And then, yes, let’s focus on whether our learning interventions are good. Do we have the best practice design, the least amount of content that’s still good, etc.? Then we can ask if we’re efficient. But if we only measure efficiency, we end up taking PDFs and PPTs and throwing them up on the screen. If we’re lucky, with a quiz. And this is not going to have an impact.
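To make the contrast concrete, here’s a toy calculation (all numbers invented): an efficiency metric tells you what a course costs per seat-hour, while an impact metric compares a business measure before and after the intervention.

```python
# Efficiency metric: cost per seat-hour (hypothetical numbers).
course_cost = 50_000            # total cost to build and run the course
learners, hours = 500, 2        # butts in seats, hours per learner
cost_per_seat_hour = course_cost / (learners * hours)    # $50 per seat-hour

# Impact metric: change in the business measure the course targets.
errors_before, errors_after = 120, 80   # e.g., monthly order errors
impact = (errors_before - errors_after) / errors_before  # one-third fewer errors

# The efficiency number says nothing about whether the errors went down.
print(cost_per_seat_hour, round(impact, 2))
```

A course could score beautifully on the first number and zero on the second, which is exactly the problem.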

So I’m advocating the focus on business metrics because that’s part of a performance consulting process to create meaningful impacts. Not in lieu of the stuff Will and the other author are advocating, but in addition. It’s all too easy to worry about good design, and miss that there’s no meaningful impact.

Our business partners will not be impressed if we’re designing efficient, and even effective learning, if it isn’t doing  anything.  Our solutions need to be  targeted at a real problem and address it. That’s why I’ll continue to say things like “As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.”  Then we also need to be professional. Will’s right that we don’t do enough to assure our effectiveness, and only focus on efficiency. But it takes it all, impact + effectiveness + efficiency, and I think it’s dangerous to say otherwise.  So what say you?

Redesigning Learning Design

16 January 2019 by Clark 2 Comments

Of late, a lot of my work has been designing learning design. Helping orgs transition their existing design processes to ones that will actually have an impact. That is, someone’s got a learning design process, but they want to improve it. One idea, of course, is to replace it with some validated design process. Another approach, much less disruptive, is to find opportunities to fine tune the design. The idea is to find the minimal set of changes that will yield the maximal benefit. So what are the likely inflection points?  Where am I finding those spots for redesigning?  It’s about good learning.

Starting at the top, one place where organizations go wrong right off the bat is the initial analysis for a course. There’s the ‘give us a course on this’, but even if there’s a decent analysis the process can go awry. Side-stepping the big issue of performance consulting (do a reality check: is this truly a case for a course), we get into working to create the objectives. It’s about how you work with SMEs. Understanding what they can,  and can’t, do well means you have the opportunity to ensure that you get the right objectives to design to.

From there, the most meaningful and valuable step is to focus on the practice. What are you having learners  do, and how can you change that?  Helping your designers switch to good  assessment writing is going to be useful. It’s nuanced, so the questions don’t  seem that different from typical ones, but they’re much more focused for success.

Of course, to support good application of the content to develop abilities, you need the right content! Again, getting designers to understand the nuances that distinguish useful examples from mere stories isn’t hard, but it’s rarely done. Similarly, knowing why you want models and not just presentations about the concept isn’t fully realized.

Of course, making it an emotionally compelling experience has learning impact as well. Yet too often we see the elements just juxtaposed instead of integrated. There  are systematic ways to align the engagement and the learning, but they’re not understood.

A final note is knowing when to have someone work alone, and when some collaboration will help. It’s not a lot, but collaboration at the right time (if it happens at all) can make a valuable contribution to the quality of the outcome.

I’ve provided many resources about better learning design, from my 7 step program white paper  to  my deeper elearning series for Learnnovators.  And I’ve a white paper about redesigning as well. And, of course, if you’re interested in doing this organizationally, I’d welcome hearing from you!

One other resource will be my upcoming workshop at the Learning Solutions conference on March 25 in Orlando, where we’ll spend a day working on learning experience design, integrating engagement and learning science.  Of course, you’ll be responsible for taking the learnings back to your learning process, but you’ll have the ammunition for redesigning.  I’d welcome seeing you there!
