Learnlets

Clark Quinn’s Learnings about Learning

Align, deepen, and space

8 July 2014 by Clark 1 Comment

I was asked, in regard to the Serious eLearning Manifesto, how people could begin to realize the potential of eLearning.  I riffed on this once before, but I want to spin it a different way.  The key is making meaningful practice.  And there are three components: align it, deepen it, and space it.

First, align it. What do I mean here?  I mean make sure that your learning objective, what they’re learning, is aligned to a real change in the business: something you know that, if people improve at it, will have an impact on a measurable business outcome.  This means two things, underneath. First, it has to be something that, if people do it differently and better, will solve a problem in what the organization is trying to do.  Second, it has to be something that benefits from learning.  If it’s not a cognitive skill shift, it should be about using a tool, or replaced with using a tool. Only use a course when a course makes sense, and make sure that course is addressing a real need.

Second, deepen it.  Abstract practice and knowledge tests are both less effective than practice that puts the learner in a context like the one they’ll face in the workplace, and has them make the same decisions they’ll need to be making after the learning experience.  Contextualize it, and exaggerate the context (in appropriate ways) to raise the level of interest and importance closer to the level of engagement that will be involved in live performance.  Make sure the challenge is sufficient, too, by having alternatives that are seductive unless you really understand. Reliable misconceptions are great distractors, by the way.  And have sufficient practice that leads from learners’ beginning ability to the final ability they need, so that they can’t get it wrong (not just until they get it right; that’s amateur hour).

Here’s where the third, space it, comes in.  Will Thalheimer has written a superb document (PDF) explaining the need for spacing. You can space out the complexity of development and the sufficiency of practice, but we need to practice, rest (read: sleep), and then practice some more. Any meaningful learning really can’t be done in one go; it has to be spread.  How much? As Will explains, that depends on how complex the task is, how often the task will be performed, and the gaps in between, but it’s a fair bit. Which is why I say learning should be expensive.
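As a rough illustration (not Will’s actual prescription, which ties the gaps to task complexity and performance frequency), spacing can be sketched as an expanding-interval schedule:

```python
from datetime import date, timedelta

def practice_schedule(start, sessions, first_gap_days=1, factor=2):
    """Illustrative expanding-interval schedule: each gap widens,
    so practice is spread over time rather than massed in one go."""
    dates, gap = [start], first_gap_days
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=gap))
        gap *= factor  # widen the gap as the skill stabilizes
    return dates

# Five sessions starting 1 July, with gaps of 1, 2, 4, and 8 days
for d in practice_schedule(date(2014, 7, 1), 5):
    print(d.isoformat())
```

The exact gaps and growth factor here are placeholders; the point is simply that the sessions, with sleep in between, stretch across weeks rather than fitting into a single event.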

After these three steps, you’ll want to include only the resources that will lead to success, provide models and examples that will support success, etc., but I believe that, regardless, learners with good practice are likely to get more out of the learning experience than from any other action you can take. So start with good practice, please!

Karen McGrane #mLearnCon Keynote Mindmap

25 June 2014 by Clark Leave a Comment

Karen McGrane evangelized good content architecture (a topic near to my heart), in a witty and clear keynote. With amusing examples and quotes, she brought out just how key it is to move beyond hard wired, designed content and start working on rule-driven combinations from structured chunks. Great stuff!


Larry Irving #mLearnCon Keynote Mindmap

24 June 2014 by Clark Leave a Comment

Larry Irving kicked off the mLearnCon with an inspiring talk about the ways in which technology can disrupt education. His ideas about VOOCs and nanodegrees were intriguing, and I wish he’d talked more about adaptive learning. A great kickoff to the event.


Curation trumps creation

18 June 2014 by Clark Leave a Comment

In the past, it has been the role of L&D to ascertain the resources necessary to support performance in the organization.  Finding the information, creating the resources, and making them available has often been a task that either results in training or complements it. I want to suggest, however, that times have changed and a new strategy may be more effective, at least in many instances.

Creating resources is hard.  We’ve seen the need to revisit the principles of learning design because despite the pleas that “we know this stuff already”, there are still too many bad elearning courses out there. Similarly with job aids, there are skills involved in doing it right.  Assuming those skills is a mistake.

There’s also the fact that creating resources is time-consuming. The time spent doing this may be better spent on other approaches.  There are plenty of needs to be addressed without finding more work.

On the flip side, there are now so many resources out there about so many things that it’s not hard to find an answer.  Finding good answers, of course, is more problematic than just finding an answer, but the answers are likely out there.

The implication here is to start curating resources, not creating them.  They might come internally, from the employees, or from external sources, but regardless of provenance, if it’s out there, it saves your resources for other endeavors.

The new mantra is Personal Knowledge Mastery, and while that’s for the individual, there’s a role for L&D here too: practicing ‘representative knowledge mastery’,  as well as fostering PKM for the workforce.  You should be monitoring feeds relevant to your role and those you’re responsible for facilitating.  You need to practice it to be able to preach it, and you should be preaching it.

The point is not to be recreating resources that can be found, conserving your energy for those things that are business critical.  One organization has suggested that they create resources only for internal culture; everything else is curated.  Certainly only proprietary material should be the focus.

So, curate over create. Create when you have to, but only then. Finding good answers is more efficient than generating them.

#itashare

From Content to Experience

3 June 2014 by Clark 1 Comment

A number of years ago, I said that the problem for publishers was not going from text to content (as the saying goes), but from content to experience.  I think elearning designers have the same problem: they are given a knowledge dump, and have to somehow transform that into an effective experience.  They may even have read the Serious eLearning Manifesto, and want to follow it, but struggle with the transition or transformation.  What’s a designer to do?

The problem is, designers will be told, “we need a course on this”, and given a dump of PowerPoint decks (PPTs), documents (PDFs), and maybe access to a subject matter expert (SME).  This is all about knowledge.  Even the SME, unless prompted carefully otherwise, will resort to telling you the knowledge they’ve learned, because they just don’t have access to what they know.  And this, by itself, isn’t a foundation for a course.  Processing the knowledge, comprehending it, presenting it, and then testing on acquisition (i.e., what rapid elearning tools make easy) isn’t going to lead to a meaningful outcome. Sorry, knowledge isn’t the same as the ability to perform.

And this ignores, of course, whether this course is actually needed.  Has anyone checked to see whether the skills associated with this knowledge have a connection with a real workplace performance issue?  Is the performance need a result of a lack of skills?  And is this content aligned to that skill?  Too often folks will ask for a course on X when the barrier is something else.  For instance, if the content is a bunch of knowledge that somehow you’re to magically put in someone’s head, such as product information or arbitrary rules, you’re far better off putting that information in the world than trying to put it in the head.  It’s really hard to get arbitrary information into the head.  But let’s assume that there is a core skill and workplace need for the sake of this discussion.

The key is determining what this knowledge actually supports doing differently.  The designer needs to go through that content and figure out what individuals will be able to do that they can’t do now (that’s important), and then develop practice doing that. This is so important that, if what they’ll be able to do differently isn’t there, there should be pushback.  While you can talk to the SME (trying to get them to talk in terms of the decisions they make instead of what they know), you may be better off inferring the decisions and then verifying and refining them with the SME.  If you have access to several SMEs, better yet, get them in a room together and just facilitate until they come up with the core decisions, but there are many situations where that’s not feasible.

Once you have that key decision, the application of the skill in context, you need to create situations where learners can practice using it.  You need to create scenarios where these decisions will play out. Even just better-written multiple choice questions will do, with: a story setting, a situation precipitating the decision, decision alternatives that are the ways learners might go wrong, consequences of the decisions, and feedback.  These practice attempts are the core of a meaningful learning experience. And there’s even evidence that putting problems up front or at the core is a valuable practice.  You also want to have sufficient practice not just ’til they get it right, but until they have a high likelihood of not getting it wrong.
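The anatomy of such a scenario-based question can be sketched as a small data structure; the field names and the auditing example below are my own illustration, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Alternative:
    action: str           # a decision the learner might take
    consequence: str      # what happens in the world as a result
    feedback: str         # why it's right, or which misconception it reflects
    correct: bool = False

@dataclass
class ScenarioQuestion:
    setting: str          # the story setting
    situation: str        # what precipitates the decision
    alternatives: list[Alternative] = field(default_factory=list)

q = ScenarioQuestion(
    setting="You're the on-call auditor for a retail client.",
    situation="Quarter-end totals don't reconcile, and the filing is due tomorrow.",
    alternatives=[
        Alternative("Trace the discrepancy back through the ledgers",
                    "You find a transposed entry in time to file.",
                    "Right: tracking back isolates the error.", correct=True),
        Alternative("Adjust the totals to match and file",
                    "The discrepancy resurfaces in the annual audit.",
                    "Reflects the misconception that small gaps self-correct."),
    ],
)
```

The point of the structure is that every wrong alternative carries a real-world consequence and feedback tied to a plausible misconception, not just “incorrect, try again”.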

One thing that might not be in the PDFs and PPTs is examples.  It’s helpful to get colorful examples of someone using the information to successfully solve a problem, and also cases where someone misapplied it and failed.  Your SME should be able to help you here, telling you engaging stories of wins and losses.  They may be somewhat resistant to the latter; worst case, have them tell stories about someone else.

The content in the PDFs and PPTs then gets winnowed down into just the resource material that helps the learner actually be able to do the task, to successfully make the decision.  Consider having the practice set in a story, with the content available through the story environment (e.g. casebooks on the shelves for examples, a ‘library’ for concepts).  But even if you present the (minimized) content and then have practice, you’ve shifted from knowledge dump/test to more of a flow of experience.  The suite of meaningful practice, contextualized well and made meaningful with a wee bit of exaggeration and careful alignment with learners’ awareness, is the essence of experience.

Yes, there’s a bit more to it than that, but this is the core: focus on  do, not dump.  And, once you get in the habit, it shouldn’t  take longer, it just takes a change in thinking.  And even if it does, the dump approach isn’t liable to  lead to any meaningful learning, so it’s a waste of time anyway.  So, create experiences, not content.

 

Setting Story

27 May 2014 by Clark Leave a Comment

I’ve been thinking about the deep challenge of motivating uninterested learners.  To me, at least part of that is making the learning of intrinsic interest.  And one of those elements is practice, which is arguably the most important element in making learning work.  So how do we make practice intrinsically interesting?

One of the challenging but important components of designing meaningful practice is choosing the context in which that practice is situated.  It’s really about finding a story line that makes the action meaningful to both the learner and the learning. It’s creative (and consequently fun), but it’s also not intrinsically obvious (which I’ve learned after trying to teach it in both game design and advanced ID workshops). There are heuristics, however, that can be useful (there’s no guaranteed formula beyond brainstorm, winnow, trial, and refine).

While Subject Matter Experts (SMEs) can be the bane of your existence while setting learning goals (they have conscious access to no more than 30% of what they do, so they tend to end up reciting what they know, which they do have access to),  they can be very useful when creating stories. There’s a reason why they’ve spent the requisite time to  become experts in the field, and that’s an aspect we can tap into. Find out  why it’s of interest to them.  In one instance, when asking experts about computer auditing, a colleague found that auditors found it like playing detective, tracking back to find the error.  It’s that sort of insight upon which a good game or practice exercise can hinge.

One of the tricks to work with SMEs is to talk about decisions.  I argue that what is most likely to make a difference to organizations is that people make better decisions, and I also believe that using the language of decisions helps SMEs focus on what they  do, not what they know.  Between your performance gap analysis of the situation, and expert insight into what decisions are key, you’re likely to find the key performances you want learners to practice.

You also want to find out all the ways learners go wrong.  Here you may well hear instructors and/or SMEs say “no matter what we do, they always…”. And those are the things you want to know, because novices don’t tend to make random errors.  Yes, there are some, owing to our cognitive architecture (it’s adaptive), which is why it’s bad to expect people to do rote things, but they’re a small fraction of mistakes.  Instead, learners make patterned mistakes based upon flaws in their conceptualizations of the performance, aka misconceptions.  And you want to trap those, because you’ll have a chance to remediate them in the learning context. They also make the challenge more appropriately tuned.

You also need the consequences of both the right choice and the misconceptions. Even if it’s just a multiple choice question, you should show what the real world consequence is before providing the feedback about why it’s wrong. It’s also the key element in scenarios, and building models for serious games.

Then the trick is to ask SMEs about all the different settings in which these decisions are embedded. Such decisions tend to travel in packs, which is why scenarios are better practice than simple multiple choice, just as scenario-based multiple choice trumps knowledge tests.  Regardless, you want to contextualize those decisions, and knowing the different settings that can be used gives you a greater palette to choose from.

Finally, you’ll want to decide how close you want the context to be to the real context.  For certain high-stakes and well-defined tasks, like flying planes or surgery, you’ll want them quite close to the real situation.  In other situations, where there’s more broad applicability and less intrinsic interest (perhaps accounting or project management), you may want a more fantastic setting that facilitates broader transfer.

Exaggeration is a key element. Knowing what to exaggerate and when is not yet a science, but the rule of thumb is to keep the core decisions based upon the important variables, while the context can be heightened to raise the stakes.  For example, accounting might not be riveting, but your job depends on it.  Raising the stakes of the accounting decision in the learning experience will mimic its real-world importance, so you might be accounting for a mob boss who’ll terminate your existence if you don’t terminate the discrepancy in his accounts!  Sometimes exaggeration can serve a pedagogical purpose as well, such as highlighting certain decisions that are rare in real life but really important when they occur. In one instance, we had asthma show up with a 50% frequency instead of the usual ~15%, because the respiratory complications that could occur required specific approaches to address.
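That frequency exaggeration can be sketched as weighted sampling over case types; the helper below is hypothetical, with the weights mirroring the asthma example:

```python
import random

def draw_cases(case_weights, n, seed=42):
    """Draw n practice cases with pedagogically exaggerated frequencies."""
    cases, weights = zip(*case_weights.items())
    rng = random.Random(seed)  # seeded for a reproducible practice set
    return rng.choices(cases, weights=weights, k=n)

# Real-world incidence is ~15%, exaggerated here to 50% so learners get
# enough exposure to the rare-but-critical respiratory decisions.
exaggerated = {"asthma": 0.50, "other": 0.50}
sample = draw_cases(exaggerated, 20)
print(sample.count("asthma"), "asthma cases out of 20")
```

The design choice is the weight table itself: the decision logic inside each case stays realistic, and only the encounter frequency is tuned.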

Ultimately, you want to choose a setting in which to embed the decisions. Just making it abstract decreases the impact of the learning, and making it about knowledge, not decisions, will render it almost useless, except for those rare bits of knowledge that absolutely have to be in the head.  You want people making decisions using models, not recalling specific facts. Facts are better off put in the world for reference, except where time is too critical. And that’s rarer than you’d expect.

This may seem like a lot of work, but it’s not that hard, with practice.  And the above is for critical decisions. In many cases, a good designer should be able to look at some content and infer what the decisions involved should be.  It’s a different design approach than transforming knowledge into tests, but it’s critical for learning.  Start working on your practice items first, aligned with meaningful objectives, and the rest will flow. That’s my claim; what say you?

Getting contextual

21 May 2014 by Clark Leave a Comment

For the current ADL webinar series on mobile, I gave a presentation on contextualizing mobile in the larger picture of L&D (a natural extension of my most recent books).  And a question came up about whether I thought wearables constituted mobile.  Naturally my answer was yes, but I realized there’s a larger issue, one that gets meta as well as mobile.

So, I’ve argued that we should be looking at models for guiding our behavior.  That we should be creating them by abstracting from successful practices, conceptualizing them ourselves, or adopting them from other areas.  A good model, with rich conceptual relationships, provides a basis for explaining what has happened and predicting what will happen, giving us a basis for making decisions.  Which means they need to be as context-independent as possible.

So, for instance, when I developed the mobile models I use, e.g. the 4C‘s and the applications of learning (see figure), I deliberately tried to create an understanding that would transcend the rapid changes that characterize mobile, and make them appropriately recontextualizable.

In the case of mobile, one of the unique opportunities is contextualization.  That means using information about where you are, when you are, which way you’re looking, temperature or barometric pressure, or even your own state: blood pressure, blood sugar, galvanic skin response, or whatever else skin sensors can detect.

To put that into context (see what I did there): with desktop learning, augmenting formal learning could mean emails that provide new examples or practice spread out over time. With a smartphone you can do the same, but you could also have localized information, so that because of where you are, you might get information related to a learning goal. With a wearable, you might get information because of what you’re looking at (e.g. a translation, or a connection to something else you know), or due to your state (too anxious? stop and wait ’til you calm down).
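A minimal sketch of that kind of rule-driven contextual delivery, with made-up context keys and thresholds, might look like:

```python
def contextual_nudge(context):
    """Hypothetical dispatcher: map sensed context to a learning nudge.

    `context` is a dict of whatever the device can sense; the keys and
    thresholds here are invented for illustration.
    """
    if context.get("anxiety", 0) > 0.7:  # e.g., from galvanic skin response
        return "Pause: wait until you calm down before continuing."
    if context.get("near_site"):         # e.g., from location sensing
        return f"Near {context['near_site']}: here's a related example for your goal."
    return None  # no rule fires: stay quiet rather than interrupt

print(contextual_nudge({"near_site": "the loading dock"}))
```

The value of framing it as rules over context, rather than per-device features, is that the same mapping survives as new sensors and devices appear.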

Similarly for performance support: with a smartphone you could take what comes through the camera and add it onto what shows on the screen; with glasses you could lay it on the visual field.  With a watch or a ring, you might have an audio narration.  And we’ve already seen how the accelerometers in fit bracelets can track your activity and put it in context for you.

Social can not only connect you to who you need to know, regardless of device or channel, but also signal you that someone’s near, detecting their face or voice, and clue you in that you’ve met this person before.  Or find someone that you should meet because you’re nearby.

All of the above use contextual information to augment the other tasks you’re doing.  The point is that you map the technology to the need, and infer the possibilities.  Models are a better basis for elearning, too, so that you teach transferable understandings (made concrete in practice) rather than specifics that can get outdated.  This is one of the elements we placed in the Serious eLearning Manifesto, of course.  They’re also useful for coaching & mentoring, as well as for problem-solving, innovating, and more.

Models are powerful tools for thinking, and good ones will support the broadest possible uses.  And that’s why I collect them, think in terms of them, create them, and most importantly, use them in my work.   I encourage you to ensure that you’re using models appropriately to guide you to new opportunities, solutions, and success.

Peeling the onion

15 May 2014 by Clark 2 Comments

I’ve been talking a bit recently about deepening formal design, specifically to achieve learning that’s flexible, persistent, and develops the learner’s abilities to become self-sustaining in work and life.  That is, not just for a course, but for a curriculum.  And it’s more than just what we talked about in the Serious eLearning Manifesto, though of course it starts there.  So, to begin with, it needs to start with meaningful objectives, provide related practice, and be trialed and developed, but there’s more: there are layers of development that wrap around the core.

One element I want to suggest is important is also in the Manifesto, but I want to push a bit deeper here.  I worked to put in the point that the elements behind, say, a procedure or a task that you apply to problems are models or concepts.  That is, a connected body of conceptual relationships that ties together your beliefs about why it should be done this way.  For example, if you’ve a procedure or process you want people to follow, there is (or should be) a rationale behind it.

And you should help learners discover and see the relationships between the model and the steps, through examples and the feedback they get on practice.  If they can internalize the understanding behind the steps, they are better prepared for the inevitable changes to the tools they use, the materials they work on, or the process changes that will come from innovation.  Training them on X, when X will ultimately shift to Y, isn’t as helpful unless you help them understand the principles that led to performance on X and will transfer to Y.

Another element is that the output of the activities should be scrutable deliverables that also annotate the thoughts behind the result.  These provide evidence of the thinking, both implicit and explicit, and a basis for mentors/instructors to understand what’s good and what still may need to be addressed in the learner’s thinking.  There’s also the creation of a portfolio of work, which belongs to the learner and can represent what they are capable of.

Of course, the choices of activities for the learner initially, and the design of them to make them engaging, by being meaningful to the learner in important ways, is another layer of sophistication in the design.  It can’t just be that you give the traditional boring problems, but instead the challenges need to be contextualized. More than that (which is already in the Manifesto), you want to use exaggeration and story to really make the challenges compelling.  Learning  should   be hard fun.

Another layer is that of 21st Century skills (for example, the SCANS competencies).  These can’t be taught separately; they really need to manifest across whatever domain learning you are doing. So you need learners to not just learn concepts, but apply those concepts to specific problems. And, in the requirements of the problem, you build in opportunities to problem-solve, communicate, collaborate, i.e. all the foundational and workplace skills. They need to reappear again and again, and be assessed (and developed) separately.

Ultimately, you want the learner to be taking on responsibility themselves.  Later assignments should include the learner being given parameters and choosing appropriate deliverables and formats for communication.  And this requires an additional layer, a layer of annotation on the learning design. The learners need to see why the learning was designed as it was, so that they can internalize the principles of good design and become self-improving learners. You, for example, in reading this far, have chosen to do this as part of your own learning, and hopefully it’s a worthwhile investment.  That’s the point; you want learners to continue to seek out challenges, and resources to succeed, as part of their ongoing self-development, and that comes from having seen learning design and been handed the keys at some point on the journey, with support that’s gradually faded.

The nuances of this are not trivial, but I want to suggest that they  are doable.  It’s a subtle interweaving, to be sure, but once you’ve got your mind around it (with scaffolded practice :), my claim is that it can be done, reliably and repeatedly.   And it should.  To do less is to miss some of the necessary elements for successful support of  an individual to become the capable and continually self-improving learner that we need.

I touched on most of this when I was talking about Activity-Based Learning, but it’s worthwhile to revisit it (at least for me :).

Facilitating Innovation

13 May 2014 by Clark 4 Comments

One of the things that emerged at the recent A(S)TD conference was that a particular gap might exist. While there are resources about learning design, performance support design, social networking, and more, there’s less guidance about facilitating innovation.  Which led me to think a wee bit about what might be involved.  Here’s a first take.

So, first, what are the elements of innovation?  Well, whether you listen to Stephen Berlin Johnson on the story of innovation, or Keith Sawyer on ways to foster innovation, you’ll see that innovation isn’t individual.  In previous work, I looked at models of innovation, and found that either you mutate an existing design, or meld two designs together.  Regardless, it comes from working and playing well together.

The research suggests that you  need to make sure you are addressing the right problem, diverge on possible solutions via diverse teams under good process, create interim representations, test, refine, repeat.  The point being that the right folks need to work together over time.

The barriers are several.  For one, you need to get the cultural elements right: welcoming diversity, openness to new ideas, safe to contribute, and time for reflection.  Without being able to get the complementary inputs, and getting everyone to contribute, the likelihood of the best outcome is diminished.

You also shouldn’t take for granted that everyone knows how to work and play well together.  Someone may not be able to ask for help in effective ways, or perhaps more likely, others may offer input in ways that minimize the likelihood that they’ll be considered.  People may not use the right tools for the job, either not being aware of the full range (I see this all the time), or just have different ways of working. And folks may not know how to conduct brainstorming and problem-solving processes effectively  (I see this as well).

So, the facilitation role has many opportunities to increase the quality of the outcome.  Helping establish the culture, first of all, is really important.  A second role would be to understand and promote the match of tools to needs (which requires, by the way, staying on top of the available tools).  Being concrete about learning and problem-solving processes, educating folks about them, and looking for situations that need facilitation is another role.  Both starting up front, educating folks before these skills are needed, and then monitoring for opportunities to tune those skills, are valuable.  Finally, developing process facilitation skills, whether by serving in that role yourself or developing the skills in others, or both, is critical.

Innovation isn’t an event, it’s a process, and it’s something that I want P&D (Learning & Development 2.0 :) to be supporting. The organization needs it, and who better?

#itashare

What do elearning users say?

15 April 2014 by Clark 2 Comments

Towards Maturity is a UK-based but global initiative looking at organizations’ use of technology for learning.  While not as well known in the US, they’ve been conducting benchmarking research on what organizations are doing, and trying to provide guidance as well.  I even put their model as an appendix in the forthcoming book on reforming L&D.  So I was intrigued to see the new report they have just released.

The report, a survey of 2000 folks in a variety of positions in organizations, asks what they think about elearning.  It covers several aspects of how people learn: when, where, how, and their opinion of elearning. The report is done in an appealing infographic-like style as well.

What intrigued me was the last section: are L&D teams tuned in to the learner voice?  The results are indicative.  This section juxtaposes what the report heard from learners versus what L&D reported in a previous study.  Picking out just a few:

  • 88% of staff like self-paced learning, but only 23% of L&D folks believe that learners have the necessary confidence
  • 84% are willing to share with social media, but only 18% of L&D believe their staff know how
  • 43% agree that mobile content is useful (or essential), but only 15% of L&D encourage mlearning
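The size of those disconnects can be tallied directly from the figures above:

```python
# Learner % vs. L&D %, from the report excerpts quoted above
pairs = {
    "self-paced learning": (88, 23),
    "sharing via social media": (84, 18),
    "mobile content": (43, 15),
}

for item, (learners, ld) in pairs.items():
    print(f"{item}: {learners - ld} point gap")
```

Gaps of 65, 66, and 28 percentage points, respectively: a sizeable mismatch between what learners report and what L&D believes.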

This is indicative of a big disconnect between L&D and the people they serve.  This is why we need the revolution!   There’s lots more interesting stuff in this report, so I strongly recommend you check it out.
