Learnlets

Clark Quinn’s Learnings about Learning

Listening

16 January 2018 by Clark 1 Comment

Listening, as I mentioned: in this case, to Guy Wallace.  As one of the premier promoters of evidence-based design, he responded to my question about what to post on with:

Any “How Tos” using methods, tools and techniques that you’ve found to work in L&D and Performance Improvement.

Since I am a fan of Guy’s work, I thought I should answer!  Now, obviously I don’t work in a typical L&D environment, so this list is somewhat biased. So I mentally ran through memorable projects from the past and looked for the success factors. Besides the best principles I usually advocate, here are a few tips and tricks that I’ve used over the years:

  • Engage.  Obviously, I wrote a book about this, but some of the quick things I do include:
    • embed the decisions they should be making in contexts where they make sense
    • as Henry Jenkins put it: “put the player in a role they  want to be in”
    • exaggerate the context
    • minimize the distractions
    • hook the learners in emotionally from the start
  • Decisions. I find that, when working on the objectives for learning projects, it’s critical to focus on the decisions that learners will ultimately be making.  I argue that what will make the difference for organizations, going forward, will be better decisions. And it keeps the discussion from focusing on knowledge. Knowledge is needed, but it’s not central.
  • Brainstorming. When running a strategy session with clients, I seed the discussion beforehand with the challenges and background material, and ask that everyone think on their own before we begin collaborating.
  • Better ‘Pair and Share’.  If, in brainstorming, you should think individually before thinking collectively, the same should hold in other formats. So I trialed a ‘pair and share’ where I asked everyone to:
    • think on the questions (asking for 2 things) first,
    • then share with another,
    • and try to reach agreement
    • (I polled the first audience I trialed it on, and they said that the discussion was better, FWIW).
  • Shared language. I have found it valuable, when starting a new project, to run a little ‘presentation’ where I present some of the models that I’m bringing to the table (that’s why I’m there ;), so we’re starting from a shared understanding. And of course I’ve reviewed materials of theirs beforehand so I can use their terminology.  Educating clients is part of a Quinnovation engagement!
  • Test.  In making the Workplace of the Future project with Learnnovators, we were barreling along full tilt, working on the second module, and I was getting increasingly worried about the fact that we hadn’t tested the first.  We finally did, relatively informally, but still got valuable feedback that changed our design somewhat. The lesson, on this and other projects: get feedback early and often.
  • Visualize. My diagramming bent had me map out the workflow of a client’s production process, to identify opportunities to tweak the process to bring in better learning science with minimal interruption.  In general, I will often jump up to the whiteboard and try to represent what I’m hearing to see if it’s shared.
  • Prototype.  Similar to the above, I will often mock up what I’m thinking about (at sort of an ‘ape with a crayon’ level of fidelity) to help communicate the idea; e.g. some sort of walkthrough.  I find that only a percentage of the audience can imagine what the experience will be without getting somewhat concrete. (And, yes, they do then complain about the production values, despite the tradeoff of cost versus value.  Sigh.)
  • Get the context.  I generally try to understand the whole ecosystem (ala ‘the revolution‘) before I engage in specifics.  What are the goals, stakeholders, what’s already being done and by whom, etc. It’s important to re-contextualize ‘best principles’, and that requires  knowing the context.
  • Architecture. Thinking things through with a design-thinking approach and a systems-thinking perspective, I’ve tried to think of platforms, not just solutions. It might be content architectures or ecosystem elements, but it’s thinking in terms of systems, not just tactics.
  • Pragmatism. One final approach that has been beneficial is thinking about how to approximate the best with a budget.  I used to ask ‘what would you do if you had magic?’, and then see how close you could get with the resources at hand. It’s a heuristic that has often led to an innovative yet viable solution.

Looking at them, I see that they generally reflect my overall focus on aligning what we do with how we think, work, and learn. Your thoughts?

2018 Trajectories

3 January 2018 by Clark Leave a Comment

Given my reflections on the past year, it’s worth thinking about the implications.  What trajectories can we expect if the trends are extended?  These are  not predictions (as has been said, “never predict anything, particularly the future”).  Instead, these are musings, and perhaps wishes for what could (even  should) occur.

I mentioned an interest in AR and VR.  I think these are definitely on the upswing. VR may be on a rebound from some early hype (certainly ‘virtual worlds’), but AR is still in the offing.  And the tools are becoming more usable and affordable, which typically presages uptake.

I think the excitement about AI will continue, but I reckon we’re already seeing a bit of a backlash. I think that’s fair enough. And I’m seeing more talk about Intelligence Augmentation, and I think that’s a perspective we continue to need. Informed, of course, by a true understanding of how we think, work, and learn.  We need to design to work  with us.  Effectively.

Fortunately, I think there are signs we might see more rationality in L&D overall. Certainly we’re seeing lots of people talking about the need for improvement. I see more interest in evaluation, which is also a good step. In fact, I believe it’s a good  first step!

I hope it goes further, of course. The cognitive perspective suggests everything from training & performance support, through facilitating communication and collaboration, to culture.  There are many facets that can be fine-tuned to optimize outcomes.

Similarly, I hope to see a continuing improvement in learning engineering. That’s part of the reason for the Manifesto and the Quinnov 8.  How it emerges, however, is less important than that it does.  Our learners, and our organizations, deserve nothing less.

Thus, the integration of cognitive science into the design of performance and innovation solutions will continue to be my theme.  When you’re ready to take steps in this direction, I’m happy to help. Let me know; that’s what I do!

Reflections on 2017

2 January 2018 by Clark Leave a Comment

The end of the calendar year, although arbitrary, becomes a time for reflection.  I looked back at my calendar to see what I’d done this past year, and it was an interesting review.  Places I’ve been and things I’ve done point to some common themes.  Such is the nature of reflections.

One of the things I did was speak at a number of events. My messages have been pretty consistent along two core themes: doing learning better, and going beyond the course.  These were both presented at TK17, which started the year, and were reiterated, one or the other, at other ATD and Guild events.

With one exception. For my final ATD event of the year, I spoke on Artificial Intelligence (AI). It was in China, and they’re going big into AI. It’s been a recurrent interest of mine since I was an undergraduate. I’ve been fortunate to experience some seminal moments in the field, and even dabble.  The interest in AI does not seem to be abating.

Another persistent area of interest has been Augmented Reality (AR) and Virtual Reality (VR). I attended an event focused on Realities, and I continue to believe in the learning potential of these approaches. Contextual learning, whether building fake contexts or leveraging real ones, is a necessary adjunct to our learning.  One AR post of mine even won an award!

My work continues to span both organizational learning and higher education. Interestingly, I spoke to an academic audience about the realities of workplace learning!  I also had a strategic engagement with a higher education institution on improving elearning.

I also worked on a couple of projects. One, which I mentioned last week, was a course on better ID.  I’m still proud of the eLearning Manifesto (as you can see in the sidebar ;).  And I continue to want to help people do better using technology to facilitate learning.  I think the Quinnov 8 are a good way.

All in all, I still believe that pursuing better and broader learning and performance is a worthwhile endeavor. Technology is a lovely complement to our thinking, but we have to do it with an understanding of how our brains work.  My last project from the year is along these lines, but it’s not yet ready to be announced. Stay tuned!

Pernicious problems

27 December 2017 by Clark 4 Comments

I’m using a standard for organizational learning quality in the process of another task.  Why or for whom doesn’t matter. What  does matter is that there are two problems in their standard that indicate we still haven’t overcome some pernicious problems.  And we need to!

So, for the first one, this is in their standard for developing learning solutions:

Uses blended models that appeal to a variety of learning styles.

Do you see the problem here?  Learning styles are debunked! There’s no meaningful and valid instrument to measure them, and no evidence that adapting to them is of use.  Appealing to them is a waste of time and effort. Design for the learning instead!  Yet here we’re seeing someone conveying legitimacy by communicating this message.

The second one is also problematic, in their standard for evaluation:

Reports typical L&D metrics such as Kirkpatrick levels, experimental models, pre- and post-tests and utility analyses.

This one’s a little harder to see. If you think about it, however, you should see that pre- and post-test measures aren’t good measures.  What you’re measuring is a delta, and the problem is, you would expect a delta. It doesn’t really tell you anything. And if the resulting performance isn’t up to scratch, you shouldn’t have bothered at all! What you want to do is confirm that you’re achieving an objectively set level of performance. Are they now able to perform? Or how many are?  Doing pre/post is like doing norm-referenced assessment (e.g. grading on a curve) when you should be doing criterion-referenced assessment.
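
To make the contrast concrete, here’s a minimal sketch (the scores and the criterion threshold are hypothetical, purely for illustration) of reporting a pre/post delta versus reporting criterion-referenced performance:

```python
# Hypothetical scores, purely for illustration (not from any real evaluation).
pre_scores  = [48, 60, 75, 80, 50, 66, 70, 78]
post_scores = [62, 71, 88, 93, 55, 79, 84, 90]
criterion = 80  # the objectively set performance level learners need to reach

# Pre/post delta: almost always positive after any instruction, so it says little.
average_gain = (sum(post_scores) - sum(pre_scores)) / len(post_scores)

# Criterion-referenced view: how many learners can now actually perform?
meeting_criterion = sum(score >= criterion for score in post_scores) / len(post_scores)

print(f"Average gain: {average_gain:.1f} points (expected, largely uninformative)")
print(f"Meeting the criterion: {meeting_criterion:.0%} (the question that matters)")
```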

And this is from an organization that purports to communicate L&D quality! These are both from their base level of operation, which means they’re considered acceptable. This is evidence that our problems aren’t just in practice, they’re pernicious; they’re present in the mindset of even the supposed experts. Is it any wonder the industry is having trouble?  And I haven’t rigorously reviewed the standard; I was merely using it (I wonder what I’d find if I did?).

Maybe I’m being too harsh. Maybe the wording doesn’t imply what I think it does.  But I’ll suggest that we need a bit more rigor, a bit more attention to science in what we do. What have I missed?

 

 

Expertise

21 December 2017 by Clark Leave a Comment

Expertise is an elusive thing. It comes from years of experience in a field.  However, it turns out that it doesn’t just accumulate. You need very specific practice and/or useful feedback to develop it.  And the more expertise you have, the better you’re able to apply it to situations. Which has implications for what you do and when and how you do it.

Expertise is valuable. The properties of expertise include that it’s compiled away to be essentially automatic. Which implies it’s not accessible for conscious introspection. (Which is why experts quite literally cannot tell you what they do!)  On the other hand, their responses to situations in their area of expertise are likely to be as good as you can get.  They apply mental models they’ve developed to solve problems.

If you want to develop expertise as an individual, you need to understand how to practice.  Deliberate practice, as Ericsson details, is the key.  You need to practice at the limits of your ability, and consciously learn from the outcomes.  It’s not just doing the job, it’s pushing the boundaries, and actively reflecting.

If you want to develop expertise as an organization  internally, the situation is very much the same.  You need resources to develop people, and stretch assignments with feedback and coaching to optimally develop the expertise.

Of course, you can bring in expertise from outside, as well.  The question then becomes one of when and who.  You can contract out work, which makes sense when the activity isn’t part of your core ability.  Outsourcing to technology or external expertise is fine for things that are in areas that are well developed.

Otherwise, you can bring in consultants. The latter is particularly useful when you are moving in a new direction or want to deepen your understanding. A good consultant will work with you not only to help address the situation, but to develop your own internal understanding. The key is working collaboratively and transparently. Yes, I have a vested interest, but I believe these things are true in principle and should be in practice.

Expertise is core not only to situations you know you need expertise in, but also to those that are new. When you need innovation, you need expertise in the complementary areas that you’re applying to address the situation.  You don’t want to be developing your learning anywhere except on the problem itself.  At least, that’s my expert opinion. Which, of course, is on tap if needed ;).

Innovations

19 December 2017 by Clark Leave a Comment

Sparked by a colleague, I’m reading The Digital Transformation Playbook, by David Rogers. In the chapter on innovation, he talks about two types of experimentation: convergent and divergent. And I was reminded that I think of two types of innovations as well.  So what are they?

Experimentation

He talks about how experimentation is the key to innovation (in fact, the chapter title is Innovate by Rapid Experimentation). His point is that you need to be continually experimenting, rapidly.  And throughout the organization, not just in separate labs. Also, it’s ok to fail, as long as the lesson’s learned.  And then he distinguishes between two types of experimentation.

The first is convergent. Not surprisingly, this is when you’re trying to eliminate options and make a decision.  This is your classic A/B testing, for example. Here you might try out two or three different solutions, to see which one works best. You create the options, and have measures you’ll use to determine the answer.  You might ask: should we use a realistic video or a cartoon animation? That’s a situation where there isn’t a principled answer, and you need to make a decision.
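
A minimal sketch of the convergent case, assuming a made-up measure and two hypothetical treatments (the names and numbers are illustrative only):

```python
import random
import statistics

# Two hypothetical treatments for the same course; learners are randomly assigned.
VERSIONS = ("realistic_video", "cartoon_animation")

def observed_score(version: str) -> float:
    """Stand-in for whatever measure you pre-committed to (e.g. scenario performance)."""
    baseline = {"realistic_video": 0.74, "cartoon_animation": 0.71}[version]
    return min(1.0, max(0.0, random.gauss(baseline, 0.08)))

results: dict[str, list[float]] = {v: [] for v in VERSIONS}
for _ in range(200):  # each simulated learner gets one randomly assigned version
    version = random.choice(VERSIONS)
    results[version].append(observed_score(version))

for version, scores in results.items():
    print(f"{version}: mean={statistics.mean(scores):.3f} (n={len(scores)})")
# In practice you'd also check the difference is reliable (e.g. a significance test)
# before converging on one option.
```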

Divergent experimentation is, instead, exploratory. Here you give folks some ideas, or a prototype, and see what happens. You don’t know what you’ll get, but you’re eager to learn.  What would a scenario look like here?

Innovation

These roughly correspond to the two types of innovation I think of. One is the ‘we need to solve this’ type. I think of this as short-term innovation. Here we are problem-solving or trouble-shooting.  You bring together a team with the relevant capabilities that is otherwise as diverse as possible. You facilitate the process. And you’re likely to try convergent experimentation.

At the other end is the serendipitous, long-term innovation that happens because you create an environment where ideas can gestate.  You’ve got access to the adjacent possible, and the opportunities to explore and share. It’s safe to experiment and fail.  People are  supposed to take time to reflect! This is more closely aligned to divergent experimentation.

Note that this is  all learning, as you don’t know the answers when you start!  The success of organizational learning, however, is a product of both. You need to solve the problems you know you have, and allow for ideas to generate solutions to problems you didn’t know you had.  Or, more optimistically, to search through idea spaces for opportunities you didn’t know to look for.

Rogers is right that continual experimentation is key.  It has to become baked into how you do what you do.  Individually, and organizationally.  And you can’t really get it unless you start practicing it yourself.  You need to continually challenge yourself, and try things both to fix the problems, and to explore things that are somewhat tangential. Your own innovations will be key to your ability to foster them elsewhere.

Too many orgs are only focused on the short term.  And while that may satisfy shareholder return expectations, it’s not a recipe for longer-term organizational survival.  You need both types of innovation. So, the question is whether you can assist your org in making the shift to the serendipitous environment.  Are you optimizing your innovation?

Higher Ed & Job Skills?

13 December 2017 by Clark 2 Comments

I sat in on a Twitter chat yesterday, #DLNChat, which is a higher-ed-tech-focused group (run by EdSurge). The topic was the link between higher ed and job skills, and I was a wee bit cynical. While I think there are great possibilities, the current state of the art leaves a lot to be desired.

So, I currently don’t think higher ed does a good job of preparation for success in business. Higher ed focuses too much on knowledge, and uses assignments that don’t resemble the job activities.  Frankly, there aren’t too many essays in most jobs!

Worse, I don’t think higher ed does a good job of developing meta-cognitive and meta-learning skills. There is little attempt to bridge assignments across courses, so your presentations in psychology 101 and sociology 202 and business 303 aren’t steadily tracked and developed. Similarly with research projects, or strategy, or… And there are precious few assignments (read: none) where you actually make decisions the way you’d need to on the job.

And, sadly, the use of technology isn’t well integrated either. You might use a presentation tool, a writing tool, or a spreadsheet, maybe even collaboratively, but it’s not typically tied to external resources and data.

Yes, I know there are exceptions, and it may be changing somewhat, but it still appears to be the case. Research, write a paper, take a test.

Yet developing these higher skills is both possible and valuable.  We could be providing more meaningful assignments, integrating meta-learning layers, and developing both meaningful skills and meta-skills.

This doesn’t have to be done at the expense of the types of things professors believe are important, but just with a useful twist in the way the knowledge is applied. It might lead to a revision of the curriculum, at least somewhat, but I reckon it’d likely be for the better ;).

Our education system, both K12 and higher ed, isn’t doing anywhere near what it could, and should. As Roger Schank says, there are only two things wrong: what we teach, and how we teach it.  We can do better. Will we?

Conceptual Clarity

6 December 2017 by Clark 1 Comment

Ok, so I can be a bit of a pedant.  Blame it on my academic background, but I believe conceptual clarity is important! If we play fast and loose with terminology, we can be convinced of something without truly understanding it.  Ultimately, we can waste money chasing unwarranted directions, and worse, perhaps even do wrong by our learners.

Where do the problems arise?  Sometimes, it’s easy to ride a bizbuzz bandwagon.  Hey, the topic is hot, and it sounds good.  Other times, it’s just too hard to spend the effort. Yet getting it wrong ends up meaning you’re wasting resources.

Let’s be clear, I’m not talking myths. Those abound, but here I’m talking about ideas that are being used relatively indiscriminately, but in at least one interpretation there’s real value.  The important thing is to separate the wheat from the chaff.

Some concepts that are running around recently and could use some clarity are the following:

Microlearning.  I tried to be clear about this here. In short, microlearning is about small chunks where the learning aggregates over time.  Aka spaced learning.  But other times, people really mean performance support (just-in-time help to succeed in the moment). What you don’t want is someone pretending it’s so unique that they can trademark it.

70:20:10.  This is another that some people deride, and others find value in. I’ve also talked about this.   The question is why they differ, and my answer is that the folks who use it as a way to think more clearly about a whole learning experience find value. Those who fret about the label are missing the point.  And I acknowledge that the label is a barrier, but that horse has bolted.

Neuro- (aka brain-). Yes, our brains are neurologically based. And yes, there are real implications. Some.  Like ‘the neurons that fire together, wire together’.  And yet there are a whole lot of discussions about ‘neuro’ that are really at the next level up: the cognitive level.  Labeling them ‘neuro’ just misleads folks into thinking it’s more scientific.

Unlearning. There’s a lot of talk about unlearning, but in the neurological sense it doesn’t make sense. You don’t unlearn something.  As far as we can tell, it’s still there, just increasingly hard to activate. The only real way to ‘unlearn’ is to learn some other response to the same situation.  You learn ‘over’ the old learning. Or overlearn.  But not unlearn. It’s an unconcept.

Gamification. This is actually the one that triggered this post. In theory, gamification is the application of game mechanics to learning.  Interestingly, Raph Koster wrote that what makes games fun is that they are intrinsically about learning!  However, there are important nuances.  It’s not just about adding PBL (points, badges, and leaderboards). These aren’t bad things, but they’re secondary.  Designing the intrinsic action around the decisions learners need to be able to make is a deeper and more meaningful implication.  Yet people tend to ignore the latter because it’s ‘harder’.  Yet it’s really just about good learning design.

There are more, of course, but hopefully these illustrate the problem. (What are yours?)  Please, please, be professional and take the time to understand our cognitive architecture well enough to make these distinctions on your own. We need the conceptual clarity!  Hopefully then we can reserve our excitement for ideas that truly add value.

Before the Course

29 November 2017 by Clark 6 Comments

It appears that, too often, people are building courses when they don’t need to (or, more importantly, shouldn’t).  I realize that there are pressures to make a course when one is requested, including expectations and familiarity, but really, you should be doing some initial thinking about what makes sense.  So here’s a rough guide about the thinking you should do  before you course.

You begin with a performance problem.  Something’s not right: calls take too long, sales success rate is too low, there’re too many errors in manufacturing.  So it must need training, right?  Er, no.  There’s this thing called ‘performance consulting’ that talks about identifying the gaps that could be preventing the desirable outcomes, and they’re not all gaps that training meets.  So we need to triage, and see what’s broken and what’s the priority.

To start, people can simply not  know what they’re supposed to do.  That may seem obvious, but it can in fact be the case.  Thus, there’s a need to communicate. Note that this and all of these are more complex than just ‘communicate’. There are the issues about who needs to communicate, and when, and to whom, etc.  But it’s  not (at least initially) a training problem.

If they do know, and could do it but aren’t, the problem isn’t going to be solved by training.  As someone once put it, if they could do it “if their life depended on it”, then there’s something else going on. If they’re not following safety procedures because they’re too onerous, a course isn’t going to fix that. You need to address their motivation.

Now, if they can’t do it, then could they do it if they had the right tools, or more people, or more time? In other words, is it a resource problem?  And, in one way I like to think about it: can we put the solution in the world, instead of in the head?  Will lookup tables, checklists, step-by-step guides or videos solve the problem? Or even connections to other folks! (There are times when it doesn’t make sense to course or even job-aid; e.g. if it’s changing too fast, or too unique, or…)

And, of course, if you don’t have the right people, training still may not work. If they need to meet certain criteria, but don’t, training won’t solve it.  Training can’t fix color-blindness or lack of height, for instance.

Finally, if the prior solutions won’t solve it, and there’s a serious skill gap, then it’s time for training.  And not just knowledge dump, of course, but models and examples and meaningful (and spaced) practice.
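
For what it’s worth, the triage above can be read as a simple decision sequence. Here’s a rough sketch (the PerformanceGap fields and labels are just illustrative shorthand, not a formal model):

```python
from dataclasses import dataclass

@dataclass
class PerformanceGap:
    """Rough, illustrative characterization of a performance problem."""
    knows_expectations: bool   # do they know what they're supposed to do?
    could_do_it: bool          # could they do it "if their life depended on it"?
    resources_would_fix: bool  # would tools, job aids, time, or people close the gap?
    meets_prerequisites: bool  # are these the right people for the task at all?

def triage(gap: PerformanceGap) -> str:
    """Walk the 'before the course' questions in order; training is the last resort."""
    if not gap.knows_expectations:
        return "communicate expectations (who, when, to whom)"
    if gap.could_do_it:
        return "address motivation (e.g. fix onerous procedures)"
    if gap.resources_would_fix:
        return "put it in the world: checklists, job aids, performance support"
    if not gap.meets_prerequisites:
        return "revisit selection criteria; training won't fix this"
    return "design training: models, examples, meaningful (and spaced) practice"

# Example: they know what to do and could do it, but aren't doing it.
print(triage(PerformanceGap(True, True, False, True)))  # -> address motivation
```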

Again, these are all abbreviated, and this is oversimplified.  There’s more depth to be unpacked, so this is just a surface level way to represent that a course isn’t always the solution.  But before you course, consider the other solutions. Please.

eLearning Land

28 November 2017 by Clark 1 Comment

This post is just a bit of elearning silliness, parodying our worst instincts…

Welcome back my friends, to the show that never ends. We’re so glad you could attend. Come inside, come inside! – Emerson, Lake & Palmer: Karn Evil 9,  1st Impression, Part 2.

It’s so good to see you, and I hope you’re ready for fun. Let’s introduce you to the many attractions to be found here.  We’ve got entertainment suitable for all ages, and wallets!  You can find something you like here, and for an attractive cost.

To start, we have the BizBuzz arcade. It’s a mirror maze, where all things look alike. Microlearning, contextual performance support, mobile elearning, chunking, just-in-time, it’s all there.  Shiny objects appear and disappear before your eyes!  Conceptual clarity is boring, it’s all about the sizzle.

And over here is the Snake Oil Pool.  It’s full of cures for what ails you!  We’ve got potions and lotions and aisles of styles.  It’s slippery, and unctuous; you can’t really get a handle on it, so how can you go wrong?  Apply our special solution, and your pains go away like magic.  Trust us.

Step right up and ride the Hype Balloon!  It’s a quick trip to the heights, held aloft by empty promises based upon the latest trends: neuro/brain-based, millennial/generations, and more.  It doesn’t matter if it holds water, because it’s lighter than air!

Don’t forget the wild Tech Lifecycle ride. You’ll go up, you’ll go down, you’ll take unpredictable twists, followed by a blazing finale. Get in line early!  You’ll leave with a lighter pocketbook, and perhaps a slight touch of nausea, but no worries, it was fun while it lasted.

Come one, come all! We’ll help you feel better, even if when you leave things aren’t any different. You’ll at least have been taken for a ride.  We’ll hope to see you again soon.

This was a jest, this was only a jest. If this were a real emergency, I’d write a book or something. Seriously, we do have to pay attention to the science in what we’re doing, and view things with a healthy skepticism.  We now return you to your regularly scheduled blog, already in progress.  
