Learnlets

Clark Quinn’s Learnings about Learning

Symbiosis

20 May 2015 by Clark

One of the themes I've been strumming in presentations is that we should complement what we do well with tools that do well the things we don't. A colleague reminded me that J.C.R. Licklider wrote of this decades ago (and I've similarly followed the premise through the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others).

We're already seeing this. Chess has changed from people playing people, through people playing computers and computers playing computers, to computer-human pairs playing other computer-human pairs. The best competitors aren't the best chess players or the best programs, but the best pairs: the player and computer that best know how to work together.

The implication is to stop trying to put everything in the head, and to start designing systems that complement us so that the combination is the optimal solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use those resources in context to achieve the goals.

Of course, this only covers known problems, the 'optimal execution' phase of organizational learning. We similarly want the right complements to support the 'continual innovation' phase. That means providing tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure this process of interaction has just the right amount of friction.

Just like a tire, interaction requires friction. Too little and you skid out of control; too much and you impede progress. People need to interact constructively to get the best outcomes. Much is known about productive interaction, though little enough seems to make its way into practice.

Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn't done just in the head, but in the world, across people and tools. Let's get out and start playing instead of staying in old trenches.

Ch-ch-ch-changes

19 May 2015 by Clark

Is there an appetite for change in L&D? That's the conversation I've had with colleagues lately. And I have to say that the answer is mixed, at best.

The consensus is that most of L&D is comfortably numb: L&D folks are barely coping with getting courses out on a rapid schedule and running training events, because that's what's expected and known. There really isn't any burning desire for change, or willingness to move even if there were.

This is a problem. As one colleague commented: "When I work with others (managers etc) they realise they don't actually need L&D any more". And that's increasingly true: with tools for narrated slides, screencasts, and videos in everyone's hands, there's little need for the same old ordinary courses from L&D. People can create or access portals to share created and curated resources, and social networks to interact with one another. L&D will become just a part of HR, addressing the requirements – onboarding and compliance – while everything else becomes self-serve.

The sad part is the promise of what L&D could be doing. If L&D started facilitating learning, not controlling it, things could go better. If L&D realized it was about supporting the broad spectrum of learning – self-learning, social learning, research, problem-solving, trouble-shooting, design, and all the other situations where you don't know the answer when you start – the possibilities are huge. L&D could be responsible for optimizing execution of the things they know people need to do, but with a broader perspective that includes putting knowledge into the world when possible. And L&D could also be optimizing the organization's ability to continually innovate.

It is this possibility that keeps me going. There's the brilliant world where the people who understand learning combine with the people who know technology, and together they enable organizations to flourish. That's the world I want to live in, and as Alan Kay famously said: "the best way to predict the future is to invent it." Can we, please?

Trojan Mice?

6 May 2015 by Clark

One of the mantras of the Learning Organization is that there should be experimentation. This has also become, of course, a mantra of the Revolution. So the question becomes: what sort of experiments should we be considering?

First, for reasons both pragmatic and principled, these are more likely to be small experiments than large. On principle, even large changes are probably better off implemented as small steps. Pragmatically, small changes can be built upon or abandoned as outcomes warrant. These small changes have colloquially been labeled 'trojan mice', a cute way to capture the notion of change via small incursions.

The open question, then, is what sort of trojan mice might be helpful in advancing the revolution?  We might think of them in each of the areas of change: formal, performance support, social, culture, etc.  What are some ideas?

In formal learning, we might, for one, push back on taking orders. For instance, we might start asking about the measures any initiative is intended to address. We could also look to implementing some of the Serious eLearning Manifesto ideas. Small steps to better learning design.

For performance support, one of the first small steps might be to even do performance support, if you aren't already. If you are, maybe look to broadening the media you use (experiment with a video, an annotated sequence of pictures, or an ebook). Or maybe try creating a portal that is user-focused, not business-silo structured.

In the social area, you might first have to pilot an enterprise social network if there isn't one. If there is, you might start hosting activities within it. A 'share your learning' lunch might be a fun way to talk about things and bring out meta-learning. Certainly, you could start instituting its use within L&D.

And with culture, you might start encouraging people to share how they work and what resources they use. Maybe film the top performers in a group giving a minute or two of talk on how they do what they do. It'd be great if you could get some of the leadership to start sharing, and maybe do a survey of what your culture actually is.

The list goes on: in tech you might try some microlearning, a mobile experiment, or considering a content model (ok, not actually building one, that's a big step ;). In strategy, you might start gathering data about the overall organization's goals, or about what infrastructure initiatives have been taken elsewhere in the org or are being contemplated.

The point is to start taking some small steps. So, I'm curious: what small steps have you tried, or what ones might you think of and suggest?

Pushing back

5 May 2015 by Clark

In a recent debate with my colleague on the Kirkpatrick model, our host/referee asked me whether I'd push back on a request for a course. Being cheeky, I said yes, but of course I know it's harder than that. I've been mulling the question, trying to think of a more pragmatic (and diplomatic ;) approach. So here's a cut at it.

The goal is not to stop with just 'yes', but to follow up. The technique is to drill in for more information under the guise of ensuring you're making the right course. Really, of course, you're trying to determine whether there's a need for a course at all – maybe a job aid or checklist will do – and if so, what's critical to success. To do this, you need to ask some pointed questions while remaining professional and helpful.

You might, then, ask something like "what's the problem you're trying to solve?" or "what will the folks taking this course be able to do that they're not doing now?". The point is to start focusing on the real performance gap you're addressing (and unmasking whether they really know it). You want to steer away from the information they think needs to be in the head, and focus on what decisions people should be able to make that they can't make now.

Experts can't tell you what they actually do, or at least about 70% of it, so you need to drill in more about behaviors; at this point you're really trying to find out what's not happening that should be. You can use the excuse that "I just want to make sure we do the right course" if there's some pushback on your inquiries, and you may also have to stand up for your requirements on the basis that you have expertise in your area and they have to respect that, just as you respect their expertise in theirs (cf. Jon Aleckson's MindMeld).

If what you discover does end up being about information, you might ask "how fast will this information be changing?" and "how much of this is critical to making better decisions?". It's hard to get information into the head, a futile effort if it'll soon be out of date, and an expensive one if the information is voluminous and arbitrary. It's also easy to think information will be helpful (the nice-to-know as well as the must-know), but really you should be looking to put information in the world if you can. There are times when it has to be in the head, but not as often as your stakeholders and SMEs think. Focus on what people will do differently.

You also want to ask "how will we know the course is working?". You can ask what change would be observed, and you should talk about how you'll measure it. Again, there could be pushback, but you need to be prepared to stick to your guns. If it isn't going to lead to some measurable delta, they haven't really thought it through. You can help them here, doing some business consulting on ROI for them. And here it's not a guise; you really are being helpful.

So I think the answer can be 'yes', but that's not the end of the conversation. This is the path to demonstrating that you are about the business, and it may be the path that makes your contribution to the organization strategic. You'll have to be about more than efficiency metrics (cost/seat/hour; "may as well weigh 'em") and about how you're actually impacting the business. And that's a good thing. ¡Viva la Revolución!

Why models matter

21 April 2015 by Clark

In the industrial age, you really didn't need to understand why you were doing what you were doing; you were just supposed to do it. At the management level, you supervised behavior, but you didn't really set strategy. Only at the top level did you use the basic principles of business to run your organization. That was then; this is now.

Things are moving faster, competitors are able to counter your advances in months, there’s more information, and this isn’t decreasing.  You really need to be more agile to deal with uncertainty, and you need to continually innovate.   And I want to suggest that this advantage comes from having a conceptual understanding, a model of what’s happening.

There are responses we can train, specific ways of acting in context. But these aren't what's most valuable any more. Experts, with vast experience responding in different situations, abstract models that guide what they do, consciously or unconsciously (the latter is a problem, as it makes the models harder to get at; experts can't tell you 70% of what they actually do!). Most people, however, are in the novice-to-practitioner range, and they're not necessarily ready to adapt to changes unless we prepare them.

What gives us the ability to react is having models that explain the underlying causal relations as we best understand them, plus support in applying those models in different contexts. If we have a model, see how it guides performance in context A and then B, and then practice applying it in contexts C and D (with model-based feedback), we gradually develop a more flexible ability to respond. It's not subconscious, as with experts, but we can figure it out.

So, for instance, if we have the rationale behind a sales process – how it connects to the customer's mental needs and current status – we can adapt it to different customers. If we understand the mechanisms of medical contamination, we can adapt to new vectors. If we understand the structure of a cyber system, we can anticipate security threats. The point is that making inferences from models is a more powerful basis than trying to adapt a rote procedure without knowing its basis.

I recognize that I talk a lot in concepts, e.g. these blog posts and diagrams, but there's a principled reason: I'm trying to give you a flexible basis – models – to apply to your own situation. That's what I do in my own thinking, and it's what I apply in my consulting. I am a collector of models, so that I have more tools for solving my own or others' problems. (BTW, I use 'concept' and 'model' relatively interchangeably, if that helps clarify anything.)

It’s also a sound basis for innovation.  Two related models (ahem) of creativity say that new ideas are either the combination of two different models or an evolution of an existing one.  Our brains are pattern matchers, and the more we observe a pattern, the more likely it will remind us of something, a model. The more models we have to match, the more likely we are to find one that maps. Or one that activates another.

Consequently, it's also one of the things I push as a key improvement to learning design. In addition to meaningful practice, give the concept behind it – the why – in the form of a model. I encourage you to look for the models behind what you do, the models in what you're presented, and the models in what your learners are asked to do.

It’s a good basis for design, for problem-solving, and for learning.  That, to me, is a big opportunity.

Defining Microlearning?

14 April 2015 by Clark

Last week on the #chat2lrn twitter chat, the topic was microlearning. It was apparently prompted by this post by Tom Spiglanin, which does a pretty good job of defining it, but some conceptual confusion showed up in the chat, making it clear there's work to be done. I reckon there may be a role for the label and even the concept, but I wanted to take a stab at what it is and isn't, at least in principle.

So the big point to me is the word 'learning'. A number of people opined about accessing a how-to video, and let's be clear: learning doesn't have to come from that. You could follow the steps and get the job done, yet need to access the video again the next time. Just as I can look up the resolution of my computer screen, use that information, but have to look it up again next time. So it could be just performance support, and that's a good thing, but it's not learning. It suits the notion of micro content, but again, it's about getting the job done, not developing new skills.

Another interpretation was little bits of learning components (examples, practice) delivered over time. That is learning, but it's not microlearning; it's distributed learning, and the overall learning experience is macro (and much more effective than the massed event model). Again, a good thing, but not (to me) microlearning. This is what Will Thalheimer calls subscription learning.

So, then, if these aren’t microlearning, what is?  To me, microlearning has to be a small but complete learning experience, and this is non-trivial.  To be a full learning experience, this requires a model, examples, and practice.  This could work with very small learnings (I use an example of media roles in my mobile design workshops).  I think there’s a better model, however.

To explain, let me digress. When we create formal learning, we typically take learners away from their workplace (physically or virtually), and then create contextualized practice. That is, we may present concepts and examples (ideally beforehand via blended learning, or less effectively in the learning event), and then we create practice scenarios. This is hard work. Another alternative is more efficient.

Here, we layer the learning on top of the work learners are already doing. Now, why isn't this performance support? Because we're not just helping them get the job done: we're explicitly turning this into a learning event by not only scaffolding the performance, but layering on a minimal amount of conceptual material that links what they're doing to a model. We (should) do this in examples and feedback on practice; now we can do it around real work. We can because (via mobile or instrumented systems) we know where learners are and what they're doing, and we can build content to do this. It's always been a promise of performance support systems that they could layer learning on top of helping the outcome, but it's as yet seldom seen.

And the focus on minimalism is good, too. We overwrite and overproduce, adding in lots that's not essential. Cf. Carroll's Nurnberg Funnel or Moore's Action Mapping. And even for non-mobile, minimalism makes sense (as I tout under the banner of the Least Assistance Principle). That is, it's really not rude to ask people (or yourself as a designer) "what's the least I can do for you?" Because that's what people generally prefer: give me the answer and let me get back to work!

Microlearning as a phrase has probably become current (he says, cynically) because elearning providers are touting it to sell their tools' new ability to deliver to mobile. But it can also be a watchword emphasizing performance support, learning 'in context', and minimalism. So I think we may want to keep using it, but I suggest being very clear about what we mean by it. It's not courses on a phone (mobile elearning), and it's not spaced-out learning; it's small but useful full learning experiences that fit, by size of objective or context, 'in the moment'. At least, that's my take; what's yours?

Starting from the end

8 April 2015 by Clark

Week before last, Will Thalheimer and I had another one of our ‘debates’, this time on the Kirkpatrick model (read the comments, too!).  We followed up last week with a live debate.  And in the course of it I said something that I want to reiterate and extend.

The reason I like the Kirkpatrick model is that it emphasizes one thing I see the industry failing to do. Properly applied (see below), it starts with the measurable change you need to see in the organization, and you work backwards from there: back to the behavior change you need in the workplace to address that measure, and from there to the changes in training and/or resources that will create that behavior change. The important point is starting with a business metric. Not 'we need a course on this', but instead: 'what business goal are we trying to impact?'

Note: the solution can be just a tool; it doesn't always have to be learning. For example, if what people need to access accurately are the specific product features of one of a multitude of solutions in rapid flux (financial packages, electronic hardware, …), trying to get it 'in the head' accurately isn't a good goal – it's an exercise in futility – and you're better off putting the information 'in the world'. (Which is why I want to change from Learning & Development to Performance & Development: it's not about learning, it's about doing!)

The problems with Kirkpatrick are several. For one, even he admitted he numbered it wrong: the starting point is numbered 'four', which misleads people. So we get the phenomenon that people do stage 1, sometimes stage 2, rarely stage 3, and stage 4 almost never, according to ATD research. And stage 1, as Will rightly points out, is essentially worthless, because the correlation between what learners think of the learning and the actual impact is essentially zero! Finally, too often Kirkpatrick is wrongly considered as evaluating only training (even the language on the site, as the link above will show you, talks only about training). It should be about the impact of an intervention, whatever the means (see above). And impact is what the Kirkpatrick model properly is about, as I opined in the blog debate.

So, in the live debate, I said I'd be happy with any other model that focused on working backwards. And I was reminded that, well, I proposed just that a while ago! The blog post is the short version, but I also wrote a rather longer and more rigorous paper (PDF), and I'm inclined to think it's one of my more important contributions to design (to date ;). It's a fairly thorough look at the design process and where we go wrong (owing to our cognitive architecture), and a proposal for an alternative approach based upon sound principles. I welcome your thoughts!

Labeling 70:20:10

7 April 2015 by Clark

In the Debunker Club, a couple of folks went off on the 70:20:10 model, and it prompted some thoughts.  I thought I’d share them.

If you're not familiar with 70:20:10, it's a framework for thinking about workplace learning that suggests the opportunity is about much more than courses. If you ask people how they learned to do what they do in the workplace, the responses suggest that somewhere around 10% came from formal learning, 20% from informal coaching and such, and about 70% from trial and error. Note the emphasis on the fact that these numbers aren't exact; they're just an indication (though considerable evidence, from a variety of sources, suggests the contribution of formal learning is somewhere between 5 and 20%).

Now, some people complain that the numbers can't be right, since no real-world measurement comes out in such neat multiples of ten. To be fair, they've been fighting against the perversion of Dale's Cone, where someone added bogus numbers that have permeated learning for decades and can't seem to be exterminated. It's like zombies! So I suspect they're overly sensitive to round numbers.

And I like the model! I've used it to frame some of my work, as a framework to think about what else we can do to support performance: coaching and mentoring, facilitating social interaction, providing challenge goals, supporting reflection, etc. And again to justify accelerated organizational outcomes.

The retort I hear is that "it's not about the numbers", and I agree. It's just a tool to help shake people out of the thought that a course is the only solution to all needs. And, outside the learning community, people get it. I have heard that, over presentations to hundreds of audiences of executives and managers, they all recognize that the contributions to their success came largely from sources other than courses.

However, if it's not about the numbers, maybe calling it the 70:20:10 model is a problem. I really like Jane Hart's diagram about Modern Workplace Learning as another way to look at it, though I really want to go beyond learning too. Performance support may achieve outcomes in ways that don't require or deliver any learning, and that's okay: there are times when it's better to have knowledge in the world than in the head.

So, I like the 70:20:10 framework, but recognize that the label may be a barrier. I'm just looking for any tools I can use to help people start thinking 'outside the course'. I welcome suggestions!

Measurement?

2 April 2015 by Clark

Sorry for the lack of posts this week: Monday was shot migrating my old machine to a new one (yay!), Tuesday was shot catching up, and Wednesday was shot with lost internet and trying to migrate the lad to my old machine. So today I realize I haven't posted all week (though you got extra from me last week ;). So here's one reflection on last week's conference.

First, if you haven't seen it, you should check out the debate I had with the good Dr. Will Thalheimer over at his blog about the Kirkpatrick model. He's upset with it because it's not permeated by learning, and I argue that its role is impact, not learning design (see my diagram at the end). Great comments, too! We'll be doing a hangout on it on Friday the 3rd of April.

The other interesting thing is that on the first day I was cornered three times for deep conversations on measurement. This is a good thing, mostly, but one in particular was worth a review. That discussion centered on whether measurement was needed for most initiatives, and I argued yes, but with a caveat.

There was an implicit assumption that for many things measurement isn't needed. In particular, for informal learning, when we've got folks successfully developed as effective self-learners and a good culture, we don't need to measure. And I agree, though we might want to track (via something like the xAPI) to see which things are effective.
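Tracking via the xAPI, as mentioned, boils down to emitting activity statements to a Learning Record Store. Here's a minimal sketch of such a statement; the actor/verb/object shape follows the xAPI specification, the 'experienced' verb is one of the ADL-registered verbs, and the email address and activity URL are hypothetical placeholders.

```python
import json

def make_statement(email, verb_id, verb_name, activity_id, activity_name):
    """Build a minimal xAPI statement (actor, verb, object) as a dict."""
    return {
        "actor": {"objectType": "Agent", "mbox": f"mailto:{email}"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {
            "objectType": "Activity",
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

# Hypothetical example: a learner consulted an informal resource
stmt = make_statement(
    "pat@example.com",
    "http://adlnet.gov/expapi/verbs/experienced",
    "experienced",
    "https://example.com/resources/troubleshooting-guide",
    "Troubleshooting guide",
)
print(json.dumps(stmt, indent=2))
```

In practice you'd POST these statements to an LRS endpoint and later query them to see which resources correlate with effective performance.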

However, I did still think that any formal intervention – whether a course, performance support, or even a specific social initiative – should be measured. First, how else are you going to tune it to get it right? Second, don't you want to attach the outcome to the intervention? I mean, if you're doing performance consulting, there should be a gap you're trying to address, or why are you bothering? And if there is a gap, you have a natural metric.
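The 'natural metric' point can be made concrete: if an intervention targets a measured gap, tuning it is just tracking how much of the gap has closed. A minimal sketch, with illustrative numbers that aren't drawn from the post:

```python
def gap_closure(baseline, target, observed):
    """Fraction of the baseline-to-target gap closed by the observed result."""
    gap = target - baseline
    if gap == 0:
        return 1.0  # no gap to close
    return (observed - baseline) / gap

# e.g. error rate was 12%, the goal is 4%, and after the
# intervention it's down to 6%: three quarters of the gap closed
closed = gap_closure(baseline=12.0, target=4.0, observed=6.0)
print(f"{closed:.0%} of the gap closed")
```

The same calculation works whether the metric should go down (error rate) or up (sales conversions), which makes it a handy single number for iterating on an intervention.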

I am pleased to see the interest in measurement, and I hope we can start getting some conceptual clarity, some good case studies, and really help make our learning initiatives into strategic contributions to the organization.  Right?

Tech Limits?

24 March 2015 by Clark

A couple of times last year, firms with some exciting learning tools approached me to talk about the market. In both cases, I had to advise them that there were barriers they'd have to address. That was brought home to me in another conversation, and it makes me worry about the state of our industry.

So the first tool is based upon a really sound pedagogy, consonant with my activity-based learning approach. The basis is giving learners assignments very much like those they'll need to accomplish in the workplace, and then resourcing them to succeed. The firm wanted to make it easy for others to create these better learning designs (as part of a campaign for better learning). The only problem was, you had to learn the design approach as well as the tool. The interface wasn't ready for prime time, but the real barrier was getting people able to use a new tool at all. I indicated some of the barriers, and they're reconsidering (while continuing to develop content against this model as a service).

The second tool supports virtual role plays in a powerful way, with smart agents that react in authentic ways. They, too, wanted to provide an authoring tool. And again my realistic assessment of the market was that people would have trouble understanding the tool. They decided to continue developing the experiences as a service.

Now, these are somewhat esoteric designs, though the former should be the basis of our learning experiences, and the latter would be a powerful addition supporting a very common and important type of interaction. The more surprising, and disappointing, issue came up in a conversation earlier this year with a proponent of a more familiar tool.

Without being specific (I've not received permission to disclose the details of any of the above), this person indicated that when training a popular and fairly straightforward tool, the biggest barrier wasn't the underlying software model. I was expecting that too much of the training was based upon rote assignments without an underlying model – and that is the case – but instead there was a more fundamental barrier: too many potential users just didn't have sufficient computer skills! I'm not talking about programming, but about fundamental understandings of files and 'styles' and other core computing elements that just weren't present in sufficient quantity in these would-be authors. Seriously!

Now, I've complained before that we're not taking learning design seriously, but obviously that's compounded by a lack of fundamental computer skills. Folks, this is elearning – not chalk talk, not edoing. If you struggle to add new apps on your computer, or to find files, you're not ready to be an elearning developer.

I admit I struggle to see how folks can assume that, without knowledge of design or of technology, they can still be elearning designers and developers. These tools are scaffolding to allow your designs to be developed. They don't do design, nor will they magically cover for a lack of tech literacy.

So, let's get realistic. Learn about learning design, and get comfortable with tech, or please, please, don't do elearning. And I promise not to do music, architecture, finance, and everything else I'm not qualified for. Fair enough?

 
