Learnlets

Clark Quinn’s Learnings about Learning

Aligning with us

22 March 2016 by Clark Leave a Comment

One of the realizations I had in writing the Revolutionize L&D book was how badly we’re out of sync with our brains. I think alignment is a big thing, both from the Coherent Organization perspective of having our flows of information aligned, and in processes that help us move forward in ways that reflect our humanity.

In short, I believe our practices are out of alignment with what we know about how we think, work, and learn. The old folklore that still permeates L&D today is based upon outdated models. And we really have to understand these differences if we’re to get better.

The mistaken belief about thinking is that it’s all done in our heads: we keep the knowledge up there, and when a situation arises we internalize it, make a logical decision, and then act. And what cognitive science says is that this isn’t really the way it works. First, our thinking isn’t all in our heads; we distribute it across representational tools like spreadsheets, documents, and (yes) diagrams. And we don’t make logical decisions without a lot of support or expertise. Instead, we make quick decisions. This means that we should be looking at tools to support thinking, not just trying to put it all in the head. We should be putting as much in the world as we can, and looking to scaffold our processes as well.

There’s also this notion that we go away and come up with the answer on our own, and that individual productivity is what matters. It turns out that most innovation, problem-solving, etc., gets better results if we do it together. As I often say, “the room is smarter than the smartest person in the room, if you manage the process right.” Yet, we don’t. And people work better when they understand why what they’re doing is important and they care about it. We should be looking at ways to get people to work together more and better, but instead we still see hierarchical decision making, restrictive cultures, and more.

And, of course, there still persists this model that an information dump and a knowledge test will lead to new capabilities. That’s a low-probability approach. Whereas if you’re serious about learning, you know it’s mostly about spaced, contextualized application of that knowledge to solve problems. Instead, we see rapid elearning tools and templates that tart up quiz questions.

The point being, we aren’t recognizing that which makes us special, and augmenting in ways that bring out the best. We’re really running organizations that aren’t designed for humans. Most of the robotic work should and will get automated, so we need to find ways to use people to do the things they’re best at. It should be the learning folks leading this, and if they’re not ready, well, they better figure it out or be left out! So let’s get a jump on it, shall we?

Context Rules

15 March 2016 by Clark Leave a Comment

I was watching a Blab (a video chat tool) about the upcoming FocusOn Learning, a new event from the eLearning Guild. This conference combines their previous mLearnCon and Performance Support Symposium with the addition of video. The previous events have been great, and I’ll of course be there (offering a workshop on cognition for mobile, a mobile learning 101 session, and one on the topic of this post). Listening to folks talk about the conference led me to ponder the connection, and something struck me.

I find it kind of misleading that it’s FocusOn Learning, given that performance support, mobile, and even video are typically more about acting in the moment than developing over time. Mobile device use tends to be more about quick access than extended experience. Performance support is more about augmenting our cognitive capabilities. Video (as opposed to animation, images, or graphics, and similar to photos) is about showing how things happen in situ (I note that this is my distinction; they may well include animation in their definition of video, caveat emptor). The unifying element, to me, is context.

So, mobile is a platform. It’s a computational medium, and as such is the same sort of computational augment that a desktop is. Except that it can be with you. Moreover, it can have sensors, so it’s not just providing computational capabilities where you are, but capabilities tailored to when and where you are.

Performance support is about providing a cognitive augment. It can be any medium – paper, audio, digital – but it’s about providing support for the gaps in our mental capabilities.  Our architecture is powerful, but has limitations, and we can provide support to minimize those problems. It’s about support  in the moment, that is, in context.

And video, like photos, inherently captures context.  Unlike an animation that represents conceptual distinctions separated from the real world along one or more dimensions, a video accurately captures what the camera sees happening.  It’s again about context.

And the interesting thing to me is that we can support performance in the moment, whether with a lookup table or a how-to video, without learning necessarily happening. And that’s OK! It’s also possible to use context to support learning, and in fact it can take less material to augment a real context than to create the artificial context that so much of learning requires.

What excited me was that there was a discussion about AR and AI. And these, to me, are also about context.  Augmented Reality layers  information on top  of your current context.  And the way you start doing contextually relevant content delivery is with rules tied to content descriptors (content systems), and such rules are really part of an intelligently adaptive system.
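The rule-driven delivery described above can be sketched concretely. This is my own minimal illustration, not anything from the conference or a particular product: content items carry descriptors, and delivery rules match those descriptors against the learner’s current context (all the names, descriptors, and content ids below are invented).

```python
# A minimal sketch of "rules tied to content descriptors" for contextual
# delivery. Every rule name, descriptor, and content id here is illustrative.

def matches(rule, context):
    """A rule fires when every condition it names is satisfied by the context."""
    return all(context.get(key) == value for key, value in rule["when"].items())

def select_content(rules, context):
    """Return the content ids of all rules that fire for this context."""
    return [rule["deliver"] for rule in rules if matches(rule, context)]

rules = [
    {"when": {"location": "warehouse", "task": "picking"}, "deliver": "picking-checklist"},
    {"when": {"location": "warehouse", "task": "packing"}, "deliver": "packing-howto-video"},
    {"when": {"device": "mobile"}, "deliver": "quick-reference-card"},
]

context = {"location": "warehouse", "task": "picking", "device": "mobile"}
print(select_content(rules, context))  # → ['picking-checklist', 'quick-reference-card']
```

An intelligently adaptive system would go further (weighting rules, learning from outcomes), but the core is this: sensed context in, contextually relevant content out.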

So I’m inclined to think this conference is about  leveraging context in intelligent ways. Or that it can be, will be, and should be. Your mileage may vary ;).

Mindmapping

3 March 2016 by Clark 11 Comments

So, if you haven’t figured it out yet, I do mindmaps. As I’ve recited before, I started doing it as a way to occupy my brain enough so I could listen to keynotes, but occasionally I use it for other purposes, such as representing structure or even planning. And through my esteemed colleague Jane Hart (whose Modern Workplace Learning book I’m going through and am thoroughly impressed by), I’m giving a mindmapping webinar today for a group of several universities in Ireland. I thought I’d share what I’m presenting.

Mindmaps are a visual way of representing knowledge. You use links to show connections between concepts (represented as nodes), developing a structural relationship. A true semantic network would have those links labeled, as there are many different types of relationships (causal, precedence, hierarchical), but mindmaps typically have unlabeled links. Still, mindmaps capture structural information in a visual way, which supports tapping into our powerful visual processing system. (This is the one I created for them to advertise the talk; it’s neither the order I ended up using for them nor the one I’m using here. ;)

You can add information to them: as a visual tool, you can add extra graphical information, like tables or charts, to augment the map. You can similarly add color as a way to layer additional semantic information such as similarity. And the links can be plain or directional. Importantly, while a mindmap is essentially equivalent to an outline if you maintain a strict tree structure, you can create a graph by adding more complex links that generate loops.
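The outline-versus-graph distinction can be made concrete. Here’s a small sketch (my illustration, not from any mindmapping tool): a map is nodes plus links, and it stays outline-equivalent only while every node except the root has exactly one incoming link and the walk from the root reaches everything without loops.

```python
# Sketch: a mindmap as nodes plus (parent, child) links. It is equivalent to
# an outline only when the links form a strict tree; a cross-link makes it a graph.

def is_strict_tree(nodes, links, root):
    """True when the links form a tree rooted at `root` covering all nodes."""
    children = {}
    incoming = {n: 0 for n in nodes}
    for parent, child in links:
        children.setdefault(parent, []).append(child)
        incoming[child] += 1
    # a node with two parents, or a link into the root, means it's a graph
    if incoming[root] != 0 or any(incoming[n] != 1 for n in nodes if n != root):
        return False
    # walk from the root; a tree reaches every node exactly once
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in seen:
            return False
        seen.add(node)
        stack.extend(children.get(node, []))
    return seen == set(nodes)

nodes = ["learning", "spacing", "context", "practice"]
outline = [("learning", "spacing"), ("learning", "context"), ("learning", "practice")]
looped = outline + [("context", "practice")]  # cross-link: now a graph, not an outline

print(is_strict_tree(nodes, outline, "learning"))  # → True
print(is_strict_tree(nodes, looped, "learning"))   # → False
```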

The process of mindmapping is fairly straightforward: you have a central node, and then generate additional nodes and link them. I tend to go counter-clockwise, and include an arrow indicating that, because I’m capturing a linear presentation, but generating a static representation of information doesn’t have any directional requirement. I find that I frequently have to rearrange to fit the mindmap to the available space, but that’s part of the benefit.

The evidence appears to show that mindmapping is superior to note-taking. I don’t do it all the time, but there are reasons to think you should. The reason, I believe, that it’s better is that you’re not just transcribing a presentation; you’re actively parsing it to represent the structure. If you do take notes, you should be paraphrasing what you hear in your own words, to have active processing of the information. The additional effort to extract the structure as well is a form of valuable cognitive processing that elaborates the information. Doing both, paraphrasing and extracting structure, would be a great way to really comprehend what you’re hearing.

As suggested, it’s helpful to mindmap talks, but it can also be a thinking tool, to analyze situations and sort out your thoughts or plan activities and add elements as you think of them. No real advantage over an outline, potentially (though the ability to add other graphics and to make non-strict maps may counter that), though I suspect some find the drawing and rearranging to be a nice physical overhead to facilitate reflecting.  And, of course, it can be an evaluation tool, asking someone to create their maps to see their understanding.

While there are dedicated tools for mindmapping, both as applications and in the cloud, which will make creating and rearranging easier (I presume), you can use almost any drawing package (I use OmniGraffle). You could use PowerPoint or Keynote, or even pencil and paper (if it’s just for the processing), though those can be harder to revise.

So, that’s my riff on mind mapping.  I welcome your thoughts.

xAPI conceptualized

1 March 2016 by Clark 6 Comments

A couple of weeks ago, I had the pleasure of attending the xAPI Base Camp, to present on content strategy. While I was there, I remembered that I have some colleagues who don’t see the connection between xAPI and learning.  And it occurred to me that I hadn’t seen a good diagram that helped explain how this all worked.  So I asked and was confirmed in my suspicion. And, of course, I had to take  a stab at it.

What I was trying to capture was how xAPI tracks activity, which can then be used for insight. I think one of the problems people have is that they think xAPI is a solution all in itself, when it is just a syntax for reporting.

So when A demonstrates a capability at a particular level, say at the end of learning, or by affirmation from a coach or mentor, that gets recorded in a Learning Record Store. We can see that A and B demonstrated it, and that C demonstrated a different level of capability (it could also be that there’s no record for C, or D, or…).
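To make the “just a syntax for reporting” point concrete, here’s a toy sketch. The statement shape (actor, verb, object) follows the xAPI idea, but every identifier here (the verb URI, the extension key, the activity id) is invented for illustration, and the “LRS” is just an in-memory list.

```python
# Toy sketch: xAPI statements are "actor verb object" records, and the LRS
# stores them. All URIs and ids below are illustrative, not from any registry.

DEMONSTRATED = "http://example.com/verbs/demonstrated"  # hypothetical verb id
LEVEL_EXT = "http://example.com/ext/level"              # hypothetical extension key

def statement(actor, verb_id, activity, level=None):
    """Build a minimal actor-verb-object statement, optionally with a level."""
    stmt = {
        "actor": {"mbox": f"mailto:{actor}@example.com"},
        "verb": {"id": verb_id},
        "object": {"id": f"http://example.com/activities/{activity}"},
    }
    if level is not None:
        stmt["result"] = {"extensions": {LEVEL_EXT: level}}
    return stmt

lrs = []  # stand-in for a Learning Record Store
for who, level in [("A", 3), ("B", 3), ("C", 1)]:  # no record at all for D
    lrs.append(statement(who, DEMONSTRATED, "negotiation", level=level))

# Query: who demonstrated negotiation at level 3?
at_level_3 = [s["actor"]["mbox"] for s in lrs
              if s.get("result", {}).get("extensions", {}).get(LEVEL_EXT) == 3]
print(at_level_3)  # → ['mailto:A@example.com', 'mailto:B@example.com']
```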

From there, we can compare that activity with results.  Our business intelligence system can provide   aggregated data of performance for A (whatever A is being measured on: sales data, errors, time to solve customer problems, customer satisfaction, etc). With that, we can see if there are the correlations we expect, e.g. everyone who demonstrated  this level of capability has reliably better performance than those who didn’t.  Or whatever you’re expecting.

Of course, you can mine the data too, seeing what emerges.  But the point is that there are a wide variety of things we might track (who touched this job aid, who liked this article, etc), and a wide variety of impacts we might hope for.  I reckon that you should plan what impacts you expect from your intervention, put in checks to see, and then see if you get what you intended.  But we can look at a lot more interventions than just courses. We can look to see if those more active in the community perform better, or any other question tied to a much richer picture than we get other ways.
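The comparison step can be sketched too. This is my illustration with invented data: take the group that shows up in the activity records, pull an aggregated performance metric, and check whether the expected difference appears (a correlation, not proof of cause).

```python
# Sketch with invented data: does the group that demonstrated the capability
# show better aggregated performance than the group that didn't?

from statistics import mean

demonstrated = {"A", "B"}  # from the LRS: who demonstrated the capability
performance = {"A": 0.92, "B": 0.88, "C": 0.61, "D": 0.58}  # e.g. CSAT scores

with_cap = mean(v for p, v in performance.items() if p in demonstrated)
without = mean(v for p, v in performance.items() if p not in demonstrated)
print(with_cap > without)  # the correlation we hoped for, though not proof of cause
```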

Ok, so you could do this with your own data-generating mechanisms, but standardization has benefits (how about agreeing that red means stop?). So, first, does this align with your understanding, or did I miss something? And, second, does this help at all?

When to gamify?

24 February 2016 by Clark Leave a Comment

I’ve had lurking in my ‘to do’ list a comment about doing a post on  when to gamify. In general, of course, I avoid it, but I have to acknowledge there are times when it makes sense.  And someone challenged me to think about what those circumstances are. So here I’m taking a principled shot at it, but I also welcome your thoughts.

To be clear, let me first define what gamification is to me. I’m a big fan of serious games, that is, when you wrap meaningful decisions into contexts that are intrinsically meaningful. And I can be convinced that there are times when tarting up memory practice with quiz-show window-dressing makes sense, e.g. when it has to be ‘in the head’. What I typically refer to as gamification, however, is where you use external resources, such as scores, leaderboards, badges, and rewards, to support behavior you want to happen.

I happened to hear a gamification expert talk, and he pointed out some rules about what he termed ‘goal science’.  He had five pillars:

  1. that clear goals make people feel connected and align the organization
  2. that working on goals together (in a competitive sense ;) makes them feel supported
  3. that feedback helps people progress in systematic ways
  4. that the tight loop of feedback is more personalized
  5. that choosing challenging goals engages people

Implicit in this is that you do good goal setting and rewards. You have to have good alignment to get these benefits. He made the point that doing it badly could be worse than not doing it at all!

With these ground rules, we can think about when it might make sense. I’ll argue that one obvious, and probably sad, case would be when you don’t have a coherent organization, and people aren’t aware of their role in it. Making up for ineffective communication isn’t necessarily a good thing, in my mind.

I think it also might make sense for a fun diversion to achieve a short-term goal. This might be particularly useful for an organizational change, when extra motivation could be of assistance in supporting new behaviors. (Say, for moving to a coherent organization. ;) Or some periodic event, supporting say a  philanthropic commitment related to the organization.

And it can be a reward for a desired behavior, such as my frequent flier points.  I collect them, hoping to spend them. I resent it, a bit, because it’s never as good as is promised, which is a worry.  Which means it’s not being done well.

On the other hand, I can’t see using it on an ongoing basis, as it seems it would undermine the intrinsic motivation of doing meaningful work.  Making up for a lack of meaningful work would be a bad thing, too.

So, I recall talking to a guy many moons ago who was an expert in motivation for the workplace. And I had the opportunity to see the staggering amount of stuff available to orgs to reward behavior (largely sales) at an exhibit happening next to our event. It’s clear I’m not an expert, but while I’ll stick to my guns about preferring intrinsic motivation, I’m quite willing to believe that there are times it works, including on me.

Ok, those are my thoughts, what’ve I missed?

The magic question

23 February 2016 by Clark Leave a Comment

A number of years ago, I wrote a paper about design, some of the barriers our cognitive architecture provides, and some heuristics I used to get around them.  I wrote a summary of the paper as four posts, starting here.  I was reminded of one of the heuristics in a conversation, and had a slightly deeper realization that of course I wanted to share.

The approach, which I then called ‘no-limits’ design, has to do with looking at what solution you’d develop if you had no limits. I now think of it as the ‘magic’ approach. As I mentioned in the post series, this approach asks what you’d design if you had magic (and referred to the famous Arthur C. Clarke quote). And while I indicated one benefit in the past, I now think there are two.

First, if you consider what you’d do if you had magic, you can help prevent a common problem: premature convergence. Our cognitive architecture has weaknesses, and a couple of them revolve around solving problems in known ways and using tools in familiar ways. It’s too easy to subconsciously rule out new options. By asking the ‘magic’ question, we ask ourselves to step outside what we’ve known and believe is possible, and consider the options we’d have if we didn’t have the technological limitations.

Similarly,  using the notion of ‘magic’ can help us explore other models for accomplishing the goal. If design is not just evolutionary, but you also want to explore the opportunities to revolutionize, you need  some way to spark new thinking.  The ability to remove the limitations and explore the core goals facilitates that.

Using this at the wrong time, however, could be problematic: you may have already constrained your thinking too far. If you consider the design process to be a clear identification of the problem (including the type of design-thinking analysis that includes ethnographic approaches) before looking for solutions, followed by considering a wide variety of input about solutions, including other approaches already tried, you’d want the ‘magic’ question to come after the problem identification but before exploring any other solutions.

Pragmatically, per my previous post, you want to think about your design processes from a point of view of leverage. Having worked through several efforts to improve design with partners and clients, there are clear leverage points that give you the maximum impact on the quality of the learning outcome (e.g. how ‘serious’ your solution is) for the minimal effort. There are many more small steps that can be integrated to improve your outcomes, so it helps to look at the process and consider improvement opportunities. So, are you ready to ask the ‘magic’ question?

Litmos Guest Blog Series

16 February 2016 by Clark Leave a Comment

As I did with Learnnovators, I’ve also done a series of posts with Litmos, in this case a year’s worth. Unlike the other series, which was focused on deeper eLearning design, they’re not linked thematically; instead, they cover a wide range of topics that were mutually agreed as being personally interesting and of interest to their audience.

So, we have posts on:

  1. Blended Learning
  2. Performance Support
  3. mLearning: Part 1 and Part 2
  4. Advanced Instructional Design
  5. Games and Gamification
  6. Courses in the  Ecosystem
  7. L&D  and  the Bigger Picture
  8. Measurement
  9. Reviewing Design Processes
  10. New Learning Technologies
  11. Collaboration
  12. Meta-Learning

If any of these topics are of interest, I welcome you to check them out.

 

Badass

10 February 2016 by Clark 1 Comment

That’s the actual title of a book, not me being a bit irreverent.  I’ve been a fan of Kathy Sierra’s since I came across her work, e.g. I  regularly refer to how  she expresses ‘incrementalism‘. She’s on top of usability and learning in very important ways. And she’s got a new book out that I was pleased to read:  Badass: Making Users  Awesome.  So why do I like it?  Because it elegantly intermixes both learning and usability to talk about how to do design right (which I care about; I used to teach interface design besides my focus on learning design), but more importantly that the lessons invoked also apply to learning.

So what’s she doing differently? She’s taking product design beyond marketing and beyond customer desires. The premise of the book is that it’s not about the user and not about the product; it’s about the two together making the user more capable in ways they care about. Your audience should be saying “Look at what I can do” because of the product, not “I love this product”. This, she argues cogently, is valuable; it trumps mere branding, instead building customer loyalty as an intrinsic outcome of the experience they have.

The argument starts with making the case that it’s about what user goals are, and then figuring out how to get there in ways that systematically develop users’  capability while managing their expectations. Along the way, she talks about being clear on what will occur, and giving them small wins along the way.  And she nicely lays out learning science and motivation research as practical implications.

While she’s more focused on developing complex products with interfaces that remove barriers like cognitive load, and provide incremental capability, this applies to learning as well. We want to get learners to new capabilities in steps that maintain motivation and prevent drop-off. She gets into issues like intermediate skills and how to develop them in ways that optimize outcomes, which is directly relevant to learning design. She cites a wide variety of people in her acknowledgements, including Julie Dirksen and Jane Bozarth in our space, so you know she’s tracking the right folks.

It’s an easy read, too. It’s unusual, paperback but on weighty paper supporting her colorful graphics that illustrate her every point.  There’s at least an equal balance of prose and images if not more on the latter side.  While not focused specifically on learning design, it includes a lot of that but also covers performance support and more in an integrated format that resonates with an overall perspective on a performance ecosystem.

While perhaps not as fundamental as Don Norman’s Design of Everyday Things (which she references, and which everyone who designs for anyone else needs to read), it’s a valuable addition for those who want to help people achieve their goals, and that includes product designers, interface designers, and learning experience designers. If you’re designing a solution for others, whether a mobile app, an authoring tool, an LMS, or other, you do need this. If you’re designing learning, you probably need this. And if you’re designing learning as a business (e.g. designing learning for commercial consumption), I highly recommend giving this a read.

Reactivating Learning

27 January 2016 by Clark Leave a Comment

(I looked  because I’m sure I’ve talked about this before, but apparently not a full post, so here we go.)

If we want our learning to stick, it needs to be spaced out over time. But what sorts of things will accomplish this?  I like to think of three types, all different forms of reactivating learning.

Reactivating learning is important. At a neural level, we’re generating  patterns of activation in conjunction, which strengthens the relationships between these patterns, increasing the likelihood that they’ll get activated when relevant. That’s why context helps as well as concept (e.g. don’t just provide abstract knowledge).  And I’ll suggest there are 3 major categories of reactivation to consider:

Reconceptualization: here we’re talking about presenting a different conceptual model that explains the same phenomena.  Particularly if the learners have had some meaningful activity from your initial learning or through their work, showing a different way of thinking about the problem is helpful. I like to link it to Rand Spiro’s Cognitive Flexibility Theory, and explain that having more ways to represent the underlying model provides more ways to understand the concept to begin with, a greater likelihood that one of the representations will get activated when there’s a problem to be solved, and will activate the other model(s) so there’s a greater likelihood of finding one that leads to a solution.  So, you might think of electrical circuits like water flowing in pipes, or think about electron flow, and either could be useful.  It can be as simple as a new diagram, animation, or just a small prose recitation.

Recontextualization: here we’re showing another example. We’re showing how the concept plays out in a new context, which gives a greater base upon which to abstract and comprehend the underlying principle, and provides a new reference that might match a situation they could actually see. To process it, you’re reactivating the concept representation, comprehending the context, and observing how the concept was used to generate a solution to this situation. A good example, with a challenging situation the learner recognizes, a clear goal, and cognitive annotation showing the underlying thinking, will serve to strengthen the learning. A graphic novel format would be fun, or a story, or video; anything that captures the story, thinking, and outcome would work.

Reapplication: this is the best, where instead of consuming a concept model or an example, we actually provide a new practice problem. This should require retrieving the underlying concept, comprehending the context, determining how the model predicts what will happen under particular perturbations, and figuring out which will lead to the desired outcomes. Practice makes perfect, as they say, so this should ideally be the emphasis in reactivation. It might be as simple as a multiple-choice question, though a scenario would be better in many instances, and a sim/game would of course be outstanding.
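One way to make the spacing concrete: put the three reactivation types on an expanding schedule. The intervals and the rotation below are my invention, not a prescription from the post, but they illustrate spaced reactivation with the later touches leaning on reapplication.

```python
# Hedged sketch: an expanding reactivation schedule. The interval lengths and
# the type rotation are illustrative choices, not established prescriptions.

from datetime import date, timedelta

TYPES = ["reconceptualization", "recontextualization", "reapplication"]

def reactivation_schedule(start, intervals_days=(2, 7, 21, 60)):
    """Pair each expanding interval with a reactivation type, favoring practice."""
    events = []
    for i, gap in enumerate(intervals_days):
        # rotate through the types once, then let later touches be reapplication
        kind = TYPES[i] if i < 3 else "reapplication"
        events.append((start + timedelta(days=gap), kind))
    return events

for when, kind in reactivation_schedule(date(2016, 1, 27)):
    print(when.isoformat(), kind)
```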

All of these serve as reactivation. Reactivation, as I’ve pointed out, is a necessary part of learning.  When you don’t have enough chance to practice in the workplace, but it’s important that you have the ability when you need it (and try to avoid putting it in the head if you can), reactivation is a critical tool in your arsenal.

Performance Detective

19 January 2016 by Clark Leave a Comment

I was on a case. I’m a performance detective, and that’s what I do. Someone wasn’t performing the way they were supposed to, and it was my job to figure out why. My client thought he knew. They always do. But I had to figure it out myself. Like always.

Before I hit the bricks, I hit the books. Look, there’s no point watching anyone if you don’t  know what you’re looking for.  What’s this mug supposed to be doing?  So I read up. What’s the job?  What’s the goal?  How do you know when it’s going well? These are questions, and I need answers. So I check it out.  Even better, if I can find numbers.  Can’t always, as some folks don’t really get the value.  Suckers.

Then I had to get a move on. You need what you find from the background, but you can’t trust it. There could be many reasons why this palooka isn’t up to scratch. Everyone wants to throw a course at it. And that may not be the problem. If it isn’t a skill problem, it’s not likely a course is going to help. You’re wasting money.

The mug might not believe it’s important. Or not want to do it a particular way. There’re lots of reasons not to do it the way someone wants. It could be harder, with no obvious benefit. If you don’t make it clear, why would they? People aren’t always dumb, it just seems that way.

Or they might not have what they need. Too often, some well-intentioned but under-aware designer wants to put some arbitrary information in their heads. Which is hard. And usually worthless. Put it in the world. Have it to hand. They may need a tool, not a knowledge dump.

Or, indeed, they may not be capable. A course could be the answer. Not just a course, of course. It needs more. Coaching, and practice. Lots of practice.  They may really be out of their depth, and dumping knowledge on them is only going to keep them drowning.

It’s not always easy. It may not be a simple answer. There can be multiple problems. It can be all of the above.  Or any combination. And that’s why they bring me in. To get the right answer, not the easy answer. And certainly not the wrong answer.

So I had to go find out what was really going on.  That’s what detectives do. They watch. They investigate. They study.  That’s what I do. I want the straight dope. If you can’t do the hard yards, you’re in the wrong job.  I love the job. And I’m good at it.

So I watched. And sure enough, there it was. Obvious, really. In retrospect. But you wouldn’t have figured it out if you hadn’t looked.  It’s not my job to fix it.  I told the client what I found.  That’s it.  Not my circus, not my monkeys. Get an architect to come up with a solution. I find the problem, and report. That’s what I do.

This quite literally came from a dream I had, and my subsequent thoughts when I woke up. And when I first conceived it, I wasn’t thinking about the performance detective role that Charles Jennings, Jos Arets, and Vivian Heijnen describe as one of five in their new 70:20:10 book, but there is a nice resonance. Hopefully my ‘hard boiled’ prose isn’t too ‘on the nose’! More importantly, what did I miss? I welcome your thoughts and feedback.
