Learnlets


Clark Quinn’s Learnings about Learning

Accreditation and Compliance Craziness

8 September 2015 by Clark 3 Comments

A continuing bane of my existence is the requirements that get put in place for a variety of things. Two in particular are related and worth noting: accreditation and compliance. The way they're typically construed is barking mad, and we can (and need to) do better.

To start with accreditation: it sounds like a good thing, making sure that someone issuing some sort of certification has the proper procedures in place. And, done right, it would be. However, what we currently see is that, basically, the body says you have to take what the Subject Matter Expert (SME) says as gospel. And this is problematic.

The root of the problem is that SMEs don't have access to around 70% of what they do, as research at the University of Southern California's Cognitive Technology group has documented. They do, of course, have access to all they 'know'. So it's easy for them to say what learners should know, but not what learners actually need to be able to do. Some experts are better than others at articulating this, but the process is blind to this nuance.

So unless the certification process is willing to allow the issuing institution the flexibility to use a process that drills down into the actual 'do', you're going to get knowledge-focused courses that don't actually achieve important outcomes. You could do things like incorporating input from those who depend on the practitioners, and/or using a replicable, grounded process with SMEs that helps them work out what the core objectives need to be: meaningful ones, à la competencies. And a shout-out to Western Governors University for somehow being accredited using competencies!

Compliance is, arguably, worse. Somehow, the amount of time you spend is the determining factor. Not what you can do at the end, but that you've done something for an hour. The notion that time spent relates to ability at this level of granularity is outright maniacal. Time would matter, differently for different folks, but only if you're doing the right thing, and there's no check for that. Instead, the assumption is that being subjected to an hour of information is somehow going to change your behavior. As if.

Again, competencies would make sense. Determine what you need people to be able to do, and then assess that. If it takes them 30 minutes, that's OK. If it takes them 5 hours, well, that's what it takes to be compliant.

I’d like to be wrong, but I’ve seen personal instances of both of these, working with clients. I’d really like to find a point of leverage to address this.  How can we start having processes that obtain necessary skills, and then use those to determine ability, not time or arbitrary authority!  Where can we start to make this necessary change?

3 C’s of Engaging Practice

26 August 2015 by Clark Leave a Comment

In thinking through what makes experiences engaging, and in particular what makes practice engaging, I riffed on some core elements. The three terms I came up with were Challenge, Choices, & Consequences. And I realized I had a nice little alliteration going, so I'm going to elaborate and see if it makes sense to me (and you).

In general, good practice has the learner making decisions in context. This has to be more than just recognizing the correct knowledge option and providing 'right' or 'wrong' feedback. The right decision has to be made in a plausible situation, with plausible alternatives, and the right feedback has to be provided.

So, first, there has to be a situation that the learner 'gets' is important. It's meaningful to them and to their stakeholders, and they want to get it right. It has to be clear there's a real decision, with outcomes that matter. And the difficulty has to be adjusted to their level of ability: if it's too easy, they're bored and little learning occurs; if it's too difficult, it's frustrating and again little learning occurs. With a meaningful story and the right level of difficulty, we have the appropriate challenge.

Then, we have to have the right alternatives to select from. Some of the challenge comes from having a real decision where you can recognize that making the wrong choice would be problematic. But the alternatives must require an appropriate level of discrimination. Alternatives so obvious or silly that they can be immediately ruled out aren't going to lead to any learning. Instead, they need to be ways learners reliably go wrong, representing misconceptions. The benefits are several: you can find out what learners really know (or don't), you get the chance to address those misconceptions, and it helps maintain the right level of challenge. So you must have the right choices.

Finally, once the choice is made, you need to provide feedback. Rather than immediately having some external voice opine 'yes' or 'no', let the learner see the consequences of that choice. This is important for two reasons. For one, it closes the emotional experience: you see what happens, wrapping up the experience. Second, it shows how things work in the world, exposing the causal relationships and assisting the learner's understanding. Then you can provide feedback (or not, if you're embedding this single decision in a scenario or game where other choices are precipitated by this one). So the final element is consequences.
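To make this concrete, here's a minimal sketch, in Python, of how a single practice decision might be represented so that each of the three C's is explicit. The structure and the customer-service content are invented for illustration, not taken from any real authoring tool:

```python
from dataclasses import dataclass, field

@dataclass
class Choice:
    text: str            # a plausible alternative
    misconception: str   # the reliable way learners go wrong ("" if correct)
    consequence: str     # what the learner sees happen as a result
    is_correct: bool = False

@dataclass
class PracticeDecision:
    scenario: str        # the meaningful story that sets up the challenge
    difficulty: int      # tuned to the learner's ability (1 = easiest)
    choices: list = field(default_factory=list)

    def respond(self, index: int) -> str:
        """Show the consequence first; explicit feedback can follow."""
        return self.choices[index].consequence

# Example: one decision in an invented customer-service scenario.
item = PracticeDecision(
    scenario="An angry client calls about a missed deadline. What do you do first?",
    difficulty=2,
    choices=[
        Choice(text="Explain why the delay wasn't your fault.",
               misconception="defending before listening",
               consequence="The client gets angrier and threatens to walk."),
        Choice(text="Acknowledge the impact and ask what they need most right now.",
               misconception="",
               consequence="The client calms down and starts problem-solving.",
               is_correct=True),
    ],
)

print(item.respond(0))  # the learner sees the consequence of a wrong choice
```

Note that every distractor carries the misconception it represents, so the eventual feedback can address why that way of thinking goes wrong, not just that the answer was wrong.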

While this isn’t complete, I think it’s a nice shorthand to guide the design of meaningful and engaging practice. What do you think?

Concrete and Contextual

19 August 2015 by Clark 3 Comments

I’m working on the learning science workshop I’m going to present at DevLearn  next month, and in thinking about how to represent the implications of designing to account for how we work better when the learning context is concrete and sufficient contexts are used, I came up with this, which I wanted to share.

[Diagram: concrete deliverables and multiple contexts]

The empirical data is that we learn better when our learning practice is contextualized. And if we want transfer, we should have practice in a spread of contexts that will facilitate abstraction and application to all appropriate settings, not just the ones seen in the learning experience. If the spread between our learning applications is too narrow, so too will our transfer be. So our activities need to be spread across a variety of contexts (and we should be providing sufficient practice).

Then, for each activity, we should have a concrete outcome we're looking for. Ideally, the learner is given a concrete deliverable that they must produce (one that mimics the type of outcome we're expecting them to be able to create as a result of the learning, whether decision, work product, or…). Ideally we're in a social situation and they're working as a team (or not), and the work can be circulated for peer review. Regardless, there should then be expert oversight of the feedback.
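As a rough sketch of what this might look like in a design document, here's one objective spread across several contexts, each with a concrete deliverable. The objective, contexts, and deliverables are all made-up examples:

```python
# One objective, practiced across deliberately varied contexts,
# each producing a concrete deliverable (all content invented).
objective = "Negotiate scope with a stakeholder"

practice_activities = [
    {"context": "internal IT project",     "deliverable": "revised project brief"},
    {"context": "external client RFP",     "deliverable": "scoped proposal section"},
    {"context": "vendor contract renewal", "deliverable": "summary email of the agreed scope"},
]

# A crude check on spread: if two activities share a context,
# the transfer we get is likely to be narrow.
contexts = [a["context"] for a in practice_activities]
assert len(set(contexts)) == len(contexts), "contexts overlap; widen the spread"

for a in practice_activities:
    print(f"{objective}: practice in {a['context']} -> produce {a['deliverable']}")
```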

With a focus on sufficient and meaningful practice, we're more likely to design learning that will actually have an impact. The goal is to have practice that is aligned with how our learning works (my current theme: aligning with how we think, work, and learn). Make sense?

Where in the world is…

18 August 2015 by Clark Leave a Comment

It’s time for another game of Where’s Clark?  As usual, I’ll be somewhat peripatetic this fall, but more broadly scoped than usual:

  • First I’ll be hitting Shenzhen, China at the end of August  to talk advanced mlearning  for a private event.
  • Then I’ll be hitting the always excellent  DevLearn  in Las Vegas at the end of September to run a workshop on learning science for design (you  should want to attend!) and give a session on content engineering.
  • At the end of October I’m down under  at the Learning@Work event in Sydney to talk the Revolution.
  • At the beginning of November I’ll be at LearnTech Asia in Singapore, with an impressive lineup of fellow speakers to again sing the praises of reforming L&D.
  • That might seem like enough, but I’ll also be at Online Educa in Berlin at the beginning of December running an mlearning for academia workshop and seeing my ITA colleagues.

Yes, it’s quite the whirl, but with this itinerary I should be somewhere near you almost anywhere you are in the world. (Or engage me to show up at your locale!) I hope to see  you at one event or another  before the year is out.

 

Designing Learning Like Professionals

12 August 2015 by Clark 4 Comments

I’m increasingly realizing that the ways we design and develop content are part of the reason why we’re not getting the respect we deserve.  Our brains are arguably the most complex things in the known universe, yet we don’t treat our discipline as the science it is.  We need to start combining experience design with learning engineering to really start delivering solutions.

To truly design learning, we need to understand learning science. And this does not mean paying attention to so-called 'brain science'. There is legitimate brain science (cf. Medina, Willingham), and then there's a lot of smoke.

For instance, there’re sound cognitive reasons why information dump and knowledge test won’t lead to learning.  Information that’s not applied doesn’t stick, and application that’s not sufficient doesn’t stick. And it won’t transfer well if you don’t have appropriate contexts across examples and practice.  The list goes on.

What it takes is understanding our brains: the different components, the processes, how learning proceeds, and what interferes. And we need to look at the right levels; lots of neuroscience is not relevant at the higher level where our thinking happens. And much about that is still under debate (just google 'consciousness' :).

What we do have are robust theories about learning that pretty comprehensively integrate the empirical data. More importantly, we have lots of 'take home' lessons about what does, and doesn't, work. But just following a template isn't sufficient. There are gaps where we have to use our best inferences, based upon models, to fill in.

The point I’m trying to make is that we have to stop treating designing learning as something anyone can do.  The notion that we can have tools that make it so anyone can design learning has to be squelched. We need to go back to taking pride in our work, and designing learning that matches how our brains work. Otherwise, we are guilty of malpractice. So please,  please, start designing in coherence with what we know about how people learn.

If you’re interested in learning more, I’ll be running a learning science for design workshop at DevLearn, and would love to see you there.

Engagement

21 July 2015 by Clark 2 Comments

I had the occasion last week to attend a day of ComicCon. If you don't know it, it is a conference about comics, but also much, much more. It covers movies and television, games (computer and board), and more. It is also a pop culture phenomenon, where new releases are announced, analysis and discussion occur, and people dress up. And it is huge!

I have gone to many conferences, and some are big, e.g. ATD's ICE or Online Educa, or Learning Technology (certainly the exhibit hall). This made the biggest of those seem like a rounding error. It's more like the Super Bowl. People camp out in line to attend the best panels, and the exhibit hall is so packed that you can hardly move. The conference itself is so big that it maxes out the San Diego Convention Center and spills out into adjoining hotels.

And that is really the lesson: something here is generating mad passion. Such overwhelming interest that there's a lottery for tickets! I attended once in the very early days, when it was small and cozy (as a college student), but this is something else. I haven't been to the Oscars, but this is bigger than what's shown on TV. It's bigger than E3. Again, I haven't seen CES since the very early days, but it can't be much larger. And this isn't for biz; this is for the people, spending their own hard-earned dollars. In designing learning, we would love to achieve such motivation. So what's going on?

So first, comics tap into some cultural touchstone; they appear in most (if not all) cultures that have developed mass media.   They tell ongoing stories that resonate with individuals, and drive other media including (as mentioned) movies, TV, games, and toys.   They can convey drama or comedy, and comment on the human condition with insight and heart. The best are truly works of art (oh, Bill Watterson, how could you stop?).

They use the standard methods of storytelling: stripping away unnecessary details, featuring (even unlikely) heroes and villains, obstacles and triumphs. And they can convey powerful lessons about values and consequences: things we are often trying to achieve. It's done through complex characters, compelling narratives, and stylistic artwork. As Hilary Price (author of the comic Rhymes with Orange) told us in a panel, she's a writer first and an artist second.

We don’t use graphic novel/comic/cartoon formats near enough in learning, and we could and should. Similarly with games, the interactive equivalent, for meaningful practice.   I fear we take ourselves too seriously, or let stakeholders keep us from truly engaging our learners. We can and should do better.  We need to understand audience engagement, and leverage that in our learning experiences.  To restate: it’s not about content, it’s about experience. Are you designing experiences?

Emergent experience?

8 July 2015 by Clark 1 Comment

So I was reading something that talked about designed versus emergent experiences. Certainly we have familiarity with designed experiences: courses/training, film, theater, amusement parks. Yet emergent experiences seem like they'd have some unique outcomes, and consequently could be more valuable and memorable. So I wondered how an emergent experience might play out so as to reliably generate a good experience.

The issue is that designed experiences, e.g. a Disney ride, are predictable.  You can repeat them and notice new things, yet the experience is largely the same.  And there can be brilliant minds behind them, and great outcomes including learning.  But could and should we shoot higher?

What emergent experiences do we know? Emergent means having to interact with something unpredictable and perhaps even reactive. It could be interacting with systems, or it could be interpersonal interaction. So what we see in clouds, the experiences we have with games, and certainly interpersonal experiences can all be emergent. Can they repeatedly have desired outcomes as well as unpredictable ones?

I think the answer is yes, if you allow for the role of some 'interference': that is, someone playing a role in controlling the outcomes. This is what happens in Dungeons and Dragons games where there is a Dungeon Master, or in an Alternate Reality Game where there's a Puppet Master, or in social learning where an instructor is structuring group assignments.

I’m interested in the latter, and the blend between.  I propose that our desired learning experiences should go beyond fixed designs, as our limitations as designers and SMEs will constrain what outcomes we achieve.  They may be good, but what can happen when people interact with each other, and rich systems, allows for more self discovery and ownership.  An alternative to social interaction would be practice set in a simulation that’s richer and with some randomness that mimics the variations seen in the real world that go beyond our specific designs.

By creating this richness through interpersonal interaction (dialogue and different viewpoints) or through simulations, we create experiences that go beyond our limitations in specific design. It certainly may go beyond our resources: branching scenarios and asynchronous independent learning are understandably more pragmatic. But when we can, and when the learning outcomes we need are richer than we can suitably address in a direct fashion (say, when we need flexible adaptation to circumstances), we should consider designing emergent experiences. And I'm inclined to think that social learning is the cheaper way to go, compared to a complex system-generated experience.
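As a toy illustration (not any real system), here's how a simulation might inject the kind of randomness I mean: the base scenario is designed, but key parameters vary on each run, so the experience emerges rather than following a fixed branch. All names and values are invented:

```python
import random

# The designed part: a base scenario and the dimensions that can vary.
BASE_SCENARIO = "Handle a production outage"
VARIATIONS = {
    "root_cause": ["bad deploy", "disk full", "expired certificate"],
    "stakeholder_mood": ["patient", "anxious", "furious"],
    "time_pressure": ["low", "high"],
}

def generate_run(seed=None):
    """Pick one value per dimension; each run plays out differently."""
    rng = random.Random(seed)
    return {name: rng.choice(options) for name, options in VARIATIONS.items()}

# Three attempts, three different configurations to adapt to,
# mimicking real-world variation beyond what we explicitly branched on.
for attempt in range(3):
    print(BASE_SCENARIO, "->", generate_run())
```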

I’m just thinking out loud here, a tangent sparked by a juxtaposition, part of my ongoing efforts to make sense of the world and apply that to creating more resilient and successful organizations. Based upon the above, I think emergent experiences can create more adaptable and flexible learning, and I think that’s increasingly needed. I welcome your thoughts, reflections, pointers, disagreements, and more.

 

SME Brains

30 June 2015 by Clark 1 Comment

As I push for better learning design, I'm regularly reminded that working with subject matter experts (SMEs) is critical, and problematic. What makes someone an SME has implications that are challenging, but it also offers a uniquely valuable perspective. I want to review some of those challenges and opportunities in one go.

One of the artifacts of how our brains work is that we compile knowledge away. We start off with conscious awareness of what we're supposed to be doing, and apply it in context. As we practice, however, our expertise becomes chunked up and increasingly automatic. As it does so, some of the elements that get compiled away are no longer available to conscious inspection. As Richard Clark of the Cognitive Technology Lab at USC lets us know, about 70% of what SMEs do isn't available to their conscious minds. Or, to put it another way, they literally can't tell us what they do!

On the other hand, they have pretty good access to what they know. They can cite all the knowledge they have to hand. They can talk about the facts and the concepts, but not the decisions. And, to be fair, many of them aren't really good at the concepts, at least not from the perspective of being able to articulate a model that is of use in the learning process.

The problem then becomes a combination of both finding a good SME, and working with them in a useful way to get meaningful objectives, to start. And while there are quite rigorous ways (e.g. Cognitive Task Analysis), in general we need more heuristic approaches.

My recommendation, grounded in Sid Meier's statement that "good games are a series of interesting decisions" and the recognition that making better decisions is likely to be the most valuable outcome of learning, is to focus rabidly on decisions. When SMEs start talking about "they need to know X" and "they need to know Y", the move is to ask leading questions like "what decisions do they need to be able to make that they don't make now?" and "how does X or Y actually lead them to make better decisions?"

Your end goal here is to winnow the knowledge away and get to the models that will make a difference to the learner's ability to act. And when you're pressed by a certification body to represent what the SME tells you, you may need to push back. I even advocate anticipating what the models and decisions are likely to be, and getting the SME to critique and improve them, rather than letting them start with a blank slate. This does require some smarts on the part of the designer, but when it works, it leverages the fact that it's easier to critique than to generate.

SMEs are also potentially valuable in recognizing where learners go wrong, particularly if they train. Most of the time, mistakes aren't random, but are based upon some inappropriate model. Ideally, you have access to these reliable mistakes, and the reasons why they're made. Your SMEs should be able to help here; they should know the ways in which non-experts fail. It may be that some SMEs aren't as good as others here, so again, as with access to the models, you need to be selective.

This is related to one of the two ways SMEs are your ally. Ideally, you're equipped with stories: great failures and great successes. These form the basis of your examples, and ideally come in narrative form. An SME should have some examples of both that they can spin, and that you can use to build up an example. This may well be part of your process of getting the concepts and practice down, but you need to get these case studies.

There’s one other way that SMEs can help. The fact that they are experts is based upon the fact that they somehow find the topic fascinating or rewarding enough to spend the requisite time to acquire expertise. You can, and should, tap into that. Find out what makes this particular field interesting, and use that as a way  to communicate the intrinsic interest to learners. Are they playing detective, problem-solver, or protector? What’s the appeal, and then build that into the practice stories you ask learners to engage in.

Working with SMEs isn't easy, but it is critical. Understanding what they can do, and where their intrinsic barriers lie, gives you a better handle on getting what you need to help learners perform. Those are some of my tips; what have you found that works?

Content/Practice Ratio?

9 June 2015 by Clark 7 Comments

I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it's usually well-written; the problem seems to be in the starting objectives. But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced.

So, I’d roughly (and generously) estimate that the ratio is around 80:20 for content: practice.  And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate.  So, two questions: do we just need more practice, or do we also have too much content. I’ll put my money on the latter, that is: both.

To start, in most of the elearning I see (even stuff I've had a role in, for reasons out of my control), the practice isn't enough. Of course, it's largely wrong anyway, being focused on reciting knowledge as opposed to making decisions, but beyond that there just isn't enough. That's OK if you know they'll be applying it right away, but that usually isn't the case. We really don't scaffold the learner from their initial capability, through more and more complex scenarios, until they're at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it's actually needed. Of course, it shouldn't be the event model, and that practice should be spaced over time. Yes, designing practice is harder than just delivering content, but once you're developing some, it's not that much harder to develop more.

However, I’ll argue we’re also delivering too much content.  I’ve suggested in the past that I can rewrite most content to be 40% – 60% less than it starts (including my own; it takes me two passes).  Learners appreciate it.  We want a concise model, and some streamlined examples, but then we should get them practicing.  And then let the practice drive them to the content.  You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.

And, yes, this is a tradeoff: how do we find a balance that both yields the outcomes we need and doesn't blow out the budget? It's an issue, but I suggest that, once you get in the habit, it's not that much more costly. And it's much more justifiable when you get to the point of actually measuring your impact. Which many orgs aren't doing yet. And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content to practice. That's if it matters; but if it doesn't, why are you bothering? And if it does, shouldn't it be done right? What ratios do you see? And what ratios do you think make sense?

Model responses

2 June 2015 by Clark Leave a Comment

I was thinking about how to make meaningful practice, and I had a thought that was tied to some previous work that I may not have shared here.  So allow me to do that now.

Ideally, our practice has us performing in ways that mirror how we perform in the real world. While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I'm talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things to practice, they're hard to evaluate.

The problem is that our technology for evaluating freeform text is limited, let alone anything more complex. While there are tools like latent semantic analysis that can be developed to read text, they're complex to develop, and they won't work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication). Ideally, people would evaluate them, but that's not a very scalable solution if you're talking about mentors, and even peer review can be challenging for asynchronous learning.

An alternative is to have the learner evaluate themselves. We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses. What they then could do was listen to a model response as well as their own. Further, we could provide a guide, an evaluation rubric, to steer the learner in evaluating their response with respect to the model response (e.g. "did you remember to include a statement and examples?").

This would work with more complex items, too.  “Here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?”  This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.

This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first.  For instance, in the example I’m talking about we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get before tasks where they didn’t know the questions.  But this approach serves as an enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of evaluating the model response against a rubric, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner. As they go forward, that rubric can continue to guide them as they move out into performance situations.

There are times when this may be problematic, but increasingly we can and should mix media and use technology to help us close the gap between the learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide. Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric. This is richer practice, and it helps shift the learning burden to the learner, helping them become self-learners. I reckon it's a good thing. I'll suggest that you consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?
