Learnlets

Clark Quinn’s Learnings about Learning

Quinnovation Fall 2016 Schedule

26 July 2016 by Clark

My fall schedule is coalescing, so I thought I'd provide pointers to when and where I'll be for the rest of this year:

I’m doing two  webinars for a government agency, one at the end of August, and one at the end of September.

I’ll be in Beijing running a mobile learning workshop on the 6th of September, and keynoting the CEFE conference on the 7th.

The week after I’ll be keynoting a private event in Connecticut.

And I’ll be delivering a virtual keynote for a different  government agency in November.

I’ll be running an elearning strategy (read: Revolution) workshop at DevLearn  in Las Vegas come mid-November,  and presenting on elearning myths.

Then, on the very last day  of November, I’ll be running an elearning design workshop at Online Educa in Berlin.

So, there's some availability in late September through October, or mid-December, if you'd like access to Quinnovation as well.

I hope that if you’re near Beijing, Las Vegas, or Berlin, you’ll be attending. If so, say hi!

 

The wrong basis

20 July 2016 by Clark

Of late, I’ve been talking about the approach organizations take to learning.  It’s come up in presentations on learning design, measurement, and learning technology strategy.  And the point is simple: we’re not using the right basis.

What we’re supposed to be doing is empirically justifiable:

  • doing  investigations into the problem
  • identifying the root cause
  • mapping back to an intervention design
  • determining how we’ll know the intervention is working
  • implementing our intervention
  • testing to see if we’ve achieved the necessary outcome
  • and revising until we do

Instead, what we see is what I’ve begun to refer to as  ‘faith-based learning’: if we build a course, it is good!  We:

  • take orders for courses
  • document  what the SME tells us
  • design a screen-friendly version of the associated  content
  • and add a  knowledge test

Which would be well and good except that this approach has a very low likelihood of affecting anything except perhaps our learners' patience (and of course our available resources). Orders for courses have little relation to the real problems, SMEs can't tell you what they actually do, content on a screen doesn't mean learners know how to or will apply it, and a quiz isn't likely to lead to any meaningful change in behavior (even if it is tarted up with racing cars).

The closer you are to the former, the better; the closer to the latter, the more likely it is that you’re quite literally wasting time and money.

Faith may not be  a bad thing for spirituality, but it’s not a particularly good basis for attempting to develop new skills.  I’ve argued that learning design really  is rocket science, and we should be taking an engineering approach.  To the extent we’re not  –  to the extent that we are implicitly accepting that a course is needed and that our linear processes are sufficient – we’re taking an approach that very much is based upon wishful thinking. And that’s not a good basis to run a business on.

It’s time to get serious about your learning.  It’s doable, with less effort than you may think.   And the alternative is really unjustifiable. So let’s get ourselves, and our industry, on a sound basis.  There’s a lot more we can do as well, but we can start by getting this part right.  Please?

‘Form’ing learning

19 July 2016 by Clark

Last week I ran a workshop for an online university that is working to improve its learning design. Substantially. They're ramping up their staff abilities, and we'd talked about how I could help. They have 'content', but wanted to improve the learning design around it. While there are a number of steps to take (including how you work with SMEs, the details you attend to in your content, etc.), their internal vocabulary talks about 'knowledge checks', and the goal was to do those better as they migrate existing courses to a new platform with a suite of assessment types.

So, first of all, my focus was on formative evaluation. If we take activity-based learning seriously, we need to ensure that there are meaningful tasks set that can provide feedback. They are fans of Make It Stick (mentioned in my Deeper eLearning reading list), so it was easy to help them recognize that good activities require learners to retrieve the information in context, so each formative evaluation should be a situation requiring a decision.

Ok, so not every formative evaluation should be such a situation. But for things that need to be known by rote, I recommend tarted-up 'drill and kill'. And it became clear that they're fine at developing standard knowledge checks; it's the more important ones that needed work.

I started out reviewing the principles, not least because there was a larger audience they wanted to have appreciate the background being applied. Then we moved on to more hands-on work. First we worked through the different assessment types (moving from true/false to more complex assessments like 'submit and compare'). We then reviewed a first pass to understand the overall course requirements and likely important milestone assessments. We concluded by working through some examples of tough challenges (they'd submitted) and workshopping how to revise them.

There was more behind this, including my coming to understand more of their context and task, but overall it appeared to develop their understanding of how to take formative evaluation and turn it into an opportunity to truly develop learners in ways that will benefit them after the learning experience.

Of course, focusing on decisions was a key component, and we visited and revisited the issues of working with SMEs. This included getting contexts, and how exaggeration is your friend.  The result is that they’re much better equipped to develop ‘knowledge checks’ that go far beyond knowledge, and actually develop skills that are critical to success after graduation.

This is the type of thinking that organizations from K12 through higher ed and workplace learning (whether corporate, not-for-profit, or government) need to adopt if they’re going to move to learning experiences that actually develop meaningful new abilities.  It’s also about good objectives and more, but what the learner actually does, how they are required to use the knowledge, is critical to the outcome. So, are you ready to make learning that works?

‘Checking’ In

13 July 2016 by Clark

As a personal reflection, the value of checklists and forcing functions can't be overstated. As I mentioned, last week I went into the woods for a few days. And while the trip didn't live up to our plans, it was a great experience. However, there was a particular gap that points out our cognitive limitations.

So, I have a backpacking checklist. And I look at it from time to time. What I didn't do this time was check it before the trip. And what I found out, once I got away from home, was that I'd forgotten both my bandana and my towel! Both are useful, and while I was able to purchase a bandana ($15! but it is microfiber and large, so I'll keep using it), I had to do without the towel (for which the bandana was a poor but necessary substitute).

We often swim or wade in the river (and did this trip too), and a towel's handy to get dry before the breeze chills you or the horseflies descend. The bandana, well, it served as a sun cover, mosquito deterrent, towel (see above), and glasses wipe. Amongst other uses.

Let me add that I almost left on today's overnite biz trip without my sleep clothes! Fortunately, I had one of those middle-of-the-nite epiphanies, and remedied it this morning.

And this just isn’t a consequence of advancing age (hey, I’m still [barely] < 60!).  It’s a natural consequence of our cognitive architecture, and we have well-established processes/tools to support these gaps.  These include checklists to help us remember things, and forcing functions whereby we place things in ways that it’s hard to forget things.

As a consequence, I’m going to do two things going forward. One is to make sure I  do check my checklist. I’ll review it for comprehensiveness in the meantime, and have developed it in conjunction with another list from an experience colleague. I have another wilderness trip, and I’ll definitely check it beforehand.  Second, I’ve now put the bandana and a towel  in my backpack. So I’d actually have to take it out to forget it!

Here’s to knowing, and applying, tools to help us overcome our cognitive deficits.  What are you doing to help not make mistakes?  And what could you do similarly for your learning design processes?

eLearning Process Survey results!

21 June 2016 by Clark

So, a few weeks ago I ran a survey asking about elearning processes*, and it's time to look at the results (I've closed it). eLearning process is something I'm suggesting is ripe for change, and I thought it appropriate to see what people thought. Some caveats: it's self-selected, it's limited (23 respondents), and it's arguably readers of this blog or folks pointed to it by others, so it's a select group. With those caveats, what did we see?

The first question looked at how we align our efforts with business needs. The alternatives were 'providing what's asked for' (e.g. taking orders), 'getting from SMEs', and 'using a process'. These are clearly in ascending order of appropriateness. Order taking doesn't allow for seeing if a course is needed, and SMEs can't tell you what they actually do. Creating a process to ensure a course is the best solution (as opposed to a job aid or going to the network), and then getting the real performance needs (by triangulating), is optimal. What we see, however, is that only a bit more than 20% are actually getting this right from the get-go, and almost 80% are failing at one of the two points along the way.

The second question asked how the assessments were aligned with the need. The options ranged from 'developing from good sources', through 'we test knowledge' and 'they have to get it right', to 'sufficient spaced contextualized practice', e.g. 'til they can't get it wrong. The clear need, if we're bothering to develop learning, is to ensure that they can do it at the end. Doing it 'until they get it right' isn't sufficient to develop a new ability to do. And we see more than 40% are focusing on using the existing content! Now, the alternatives were not totally orthogonal (e.g. you could have the first response and any of the others), so interpreting this is somewhat problematic. I assumed people would know to choose the lowest option in the list if they could, and I don't know that (a flaw in the survey design). Still, it's pleasing to see that almost 30% are doing sufficient practice, but that's only a wee bit ahead of those who say they're just testing knowledge! So it's still a concern.

The third question looked at the feedback provided. The options included 'right or wrong', 'provides the right answer', and 'indication for each wrong answer'. I've been railing against one piece of feedback for all the wrong answers for years now, and it's important. The alternatives to the right answer shouldn't be random, but instead should represent the ways learners typically get it wrong (based upon misconceptions). It's nice (and I admit somewhat surprising) that almost 40% are actually providing feedback that addresses each wrong answer. That's a very positive outcome. However, that it's not even half is still kind of concerning.

The fourth question dug into the issue of examples. There are nuances of detail about examples, and here I was picking up on a few of these. The options ranged from 'having', through 'coming from SMEs' and 'illustrate the concept and context', to 'showing the underlying thinking'. Again, obviously the latter is the best. It turns out that experts don't typically show the underlying cognition, and yet it's really valuable for the learning. We see that we are getting the link of concept to context clear, and together with showing thinking we're nabbing roughly 70% of the examples, so that's a positive sign.

The fifth question asked about concepts. Concepts are (or should be) the models that guide performance in the contexts seen across examples and practice (and the basis for the aforementioned feedback). The alternatives ranged from 'using good content' and 'working with SMEs' to 'determining the underlying model'. It's the latter that is indicated as the basis for making better decisions going forward. (I suggest that what will help orgs is not the ability to receive knowledge, but to make better decisions.) And we see over 30% going to those models, but still a high percentage taking the presentations from the SMEs. Which isn't totally inappropriate, as they do have access to what they learned. I'm somewhat concerned overall that much of ID seems to talk about practice and 'content', lumping intros and concepts and examples and closing all together into the latter (without suitable differentiation), so this was better than expected.

The sixth question tapped into the emotional side of learning: engagement. The options were 'giving learners what they need', 'a good look', 'gamification', and 'tapping into intrinsic motivation'. I've been a big proponent of intrinsic motivation (heck, I effectively wrote a book on it ;), and not gamification. I think an appealing visual design matters, but just 'giving them what they need' isn't sufficient for novices: they need the emotional component too. For practitioners, of course, not so much. I'm pleased that no one chose gamification (yet the success of companies that sell 'tart up' templates suggests that this isn't the norm). Still, more than a third are going to intrinsic motivation, which is heartening. There's a ways to go, but some folks are hearing the message.

The last question got into measurement. We should be evaluating what we do. Ideally, we start from a business metric we need to address and work backward. That's typically not seen. The options basically covered the Kirkpatrick model, working from 'smile sheets', through 'testing after the learning experience' and 'checking changes in workplace behavior', to 'tuning until impacting org metrics'. I was pleasantly surprised to see over a third doing the latter; my results don't parallel what I've seen elsewhere. I'm dismayed, of course, that over 20% are still just asking learners, which we know in general isn't of particular use.

This was a set of questions deliberately digging into areas where I think elearning falls down, and (at least with this group of respondents) it's not as good as I'd hoped, but not as bad as I feared. Still, I'd suggest there's room for improvement, given the constraints above about who the likely respondents are. It's not a representative sample, I'd suspect.

Clearly, there are ways to do well, but it’s not trivial. I’m arguing that we can do good elearning without breaking the bank, but it requires an understanding of the inflection points of the design process where small changes can yield important results. And it requires an understanding of the deeper elements to develop the necessary tools and support. I have been working with several organizations to make these improvements, but it’s well past time to get serious about learning, and start having a real impact.

So over to you: do you see this as a realistic assessment of where we are? And do you take the overall results as indicating a healthy industry, or an industry that needs to go beyond haphazard approaches and start practicing Learning Engineering?

*And, let me say, thanks very much to those respondents who took the time to respond. It was quick, but still, the effort was much appreciated.

 

John Black #ICELW Keynote Mindmap

16 June 2016 by Clark

Professor John Black of Columbia University gave a fascinating talk about how games can leverage “embodied cognition” to achieve deeper learning. The notion is that physical enaction gives you richer activation, and sponsors deeper learning. It obviously triggered lots of thoughts (mine are the ones in the bubbles :). Lots to ponder.

The Quinnovation eLearning Process Survey

1 June 2016 by Clark

In the interests of understanding where the market is, I’m looking to benchmark where organizations are. Sure, there are other data points, but I have my own questions I would like to get answered. So I’ve created a quick survey of seven questions (thanks, SurveyMonkey) I’d love for you to fill out.

My interest is in finding out about the processes used in designing and delivering elearning. While I have my own impressions, I thought it would be nice to bolster them with data. So here we are.
 
And I’m not asking what org you’re working for, because I’d appreciate honest answers.  Please feel free to respond and circulate to those you know in other organizations (but try to only have one person from your org fill it out).

This is an experiment (hey, that’s what innovation is all about ;), so we’ll see how it goes. I’ll report out what happens when responses start petering out (or when I hit my 100 response cap ;). I welcome your comments or questions as well. Thanks!


Where do comics/cartoons fit?

31 May 2016 by Clark

I’ve regularly suggested that you want to use the right media for the task, and there are specific cognitive properties of media that help determine the answer.  One important dimension is context versus concept, and another is dynamic versus static.  But I realized I needed to extend it.

To start with, concepts are relationships, such as those captured in diagrams (as this one is!). Whereas context is the actual setting. For one, you want to abstract away; for the other, you want to be concrete. Similarly, some relationships, and settings, are static, whereas others are dynamic. Obviously, here we're talking static relationships, but if we wanted to illustrate some chemical process, we might need an animation.

So, for contextualization, we can use a photo capturing the real setting. Unless, of course, it’s dynamic and we need a video. Similarly, if we need conceptual relationships, we use a diagram, unless again if it’s dynamic and we need an animation. (By animation, I mean a dynamic diagram, not a cartoon, just as a video is a dynamic recording of a live setting, not a cartoon.)
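The two dimensions amount to a small decision procedure. Here's a sketch (the function and labels are my own, just restating the mapping above, not any formal framework):

```python
def choose_medium(kind: str, dynamic: bool) -> str:
    """Map concept-vs-context and static-vs-dynamic to a medium."""
    if kind == "concept":
        return "animation" if dynamic else "diagram"  # relationships
    if kind == "context":
        return "video" if dynamic else "photo"        # real settings
    raise ValueError(f"unknown kind: {kind!r}")

print(choose_medium("concept", dynamic=False))  # -> diagram
print(choose_medium("context", dynamic=True))   # -> video
```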

Audio’s a funny case, in that it can be static as text or dynamic as audio. The needs change depending on where you need your attention represented: you can’t (and shouldn’t) put static text on a dynamic visual, and you can’t use video if the attention can’t be visually distracted. Audio is valuable when you can’t take your eyes away (e.g. the audio guidance on a GPS, “now turn left”).

Note that there are halfway points. You can capture a sequence of static images in lieu of a video (think narrated slide show).  Similarly, a diagram could be shown in multiple states.  And this is all ignoring interactives.  But there’s a particular place I want to go, hinted above.

I was reflecting that comics (static) and cartoons (dynamic) are instances that don't naturally fall out of my characterization, and realized I needed a way to consider them. I posit that comics/cartoons are halfway between context and concept. They strip away unnecessary context, so that it's easier to see what's important, and have the potential (via, say, thought balloons) to annotate the world with the concept. So they're semi-conceptual, and semi-contextual. I've regularly argued that we don't use them often enough, for a number of reasons, and it's important to think about where they fit.

This is my proposal: that they help focus attention on important elements, stripping unnecessary details while retaining the ability to elaborate (as well as the rest of the benefits: familiarity, bandwidth, etc.). So, what do you say? Does this fit and make sense? Are you going to use more comics/graphic novels/cartoons?

Heading in the right direction

26 May 2016 by Clark

Most of our educational approaches – K12, Higher Ed, and organizational – are fundamentally wrong.  What I see in schools, classrooms, and corporations are information presentation and knowledge testing.  Which isn’t bad in and of itself, except that it won’t lead to new abilities to  do!  And this bothers me.

As a consequence, I took a stand, trying to create a curriculum that wasn't about content, but instead about action. I elaborated it in some subsequent posts, trying to make clear that the activities could be connected and social, so that you could be developing something over time, and also that the activities produced products – both the work and thoughts on the work – that serve as a portfolio.

I was just reading and saw some lovely synergistic thoughts that give me hope. For one, Paul Tough apparently wrote a book on the non-cognitive aspects of successful learners, How Children Succeed, and then followed it up with Helping Children Succeed, which digs into the ignored 'how'. His point is that elements like 'grit' that have been (rightly) touted aren't developed in the same way cognitive skills are, and yet they can be developed. I haven't read his book (yet), but in exploring an interview with him, I found out about Expeditionary Learning.

And what Expeditionary Learning has, I’m happy to discover, is an approach based upon deeply immersive projects that integrate curricula and require the learning traits recognized as important.  Tough’s point is that the environment matters, and here are schools that are restructured to be learning environments with learning cultures.  They’re social, facilitated, with meaningful goals, and real challenges. This is about learning, not testing.  “A teacher’s primary task is to help students overcome their fears and discover they can do more than they think they can.”

And I similarly came across an article  by Benjamin Riley, who’s been pilloried as the poster-child against personalization.  And he embraces that from a particular stance, that learning should be personalized by teachers, not technology.  He goes further, talking about having teachers understand learning science, becoming learning engineers.  He also emphasizes social aspects.

Both of these approaches indicate a shift from content regurgitation to meaningful social action, in ways that reflect what’s known about how we think, work, and learn.  It’s way past time, but it doesn’t mean we shouldn’t keep striving to do better. I’ll argue that in higher ed and in organizations, we should also become more aware of learning science, and on meaningful activity.  I encourage you to read the short interview and article, and think about where you see leverage to improve learning.  I’m happy to help!

Learning in Context

4 May 2016 by Clark

In a recent guest post, I wrote about the importance of context in learning. And for a featured session at the upcoming FocusOn Learning event, I'll be talking about performance support in context. But there was a recent question about how you'd do it in a particular environment, and that got me thinking about the necessary requirements.

As context (ahem), there are already context-sensitive systems. I helped lead the design of one where a complex device was instrumented, and consequently there were many indicators about the current status of the device. This trend is increasing. And there are tools to build context-sensitive help systems around enterprise software, whether purchased or home-grown. And there are context-sensitive systems that track your location on mobile and allow you to use that to trigger a variety of actions.

Now, to be clear, these are already in use for performance support, but how do we take advantage of them for learning? Moreover, can we go beyond 'location'-specific learning? I think we can, if we rethink.

So first, we obviously can use those same systems to deliver specific learning. We can have a rich model of learning around a system, say a detailed competency map, and then with a rich profile of the learner we can know what they know and don't. Then, when they're at a point where there's a gap between their knowledge and the desired state, we can trigger some additional information. It's in context, at a 'teachable moment', so it doesn't necessarily have to be assessed.

This would typically be on top of performance support, as they're still learning, so we don't want to risk a mistake. Or we could give them a little chance to try it out and get it wrong, one that doesn't actually get executed, and then give them feedback and the right answer to perform. We'd have to be clear, however, about why learning is needed in addition to the right answer: is this something that really needs to be learned?

I want to go a wee bit further, though: can we build it around what the learner is doing? How could we know? Besides increasingly complex sensor logic, we can use when they are. What's on their calendar? If it's tagged appropriately, we can know at least what they're supposed to be doing. And we can develop not only specific system skills, but more general business skills: negotiation, running meetings, problem-solving/trouble-shooting, design, and more.
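The gap-triggered delivery described above can be sketched in a few lines. Everything here is hypothetical (the map, the tags, the competency names); the point is the shape of the logic: compare what the current (tagged) task requires against the learner's profile, and surface a nudge at the teachable moment:

```python
# Hypothetical competency map: calendar task tag -> competencies required.
COMPETENCY_MAP = {
    "negotiation": {"anchoring", "active_listening", "BATNA_analysis"},
    "run_meeting": {"agenda_design", "facilitation"},
}

def nudges_for(task_tag: str, learner_knows: set[str]) -> list[str]:
    """Competencies to develop in context, given the gap for this task."""
    gap = COMPETENCY_MAP.get(task_tag, set()) - learner_knows
    return sorted(gap)

# A learner heading into a negotiation who already listens actively:
print(nudges_for("negotiation", {"active_listening"}))
# -> ['BATNA_analysis', 'anchoring']
```

In a real system the task tag would come from the calendar entry and the profile from a learner record; the set difference is the 'gap' that triggers the in-context learning.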

The point is that our learners are in contexts all the time.  Rather than take them away to learn, can we develop learning that wraps around what they’re doing? Increasingly we can, and in richer and richer ways. We can tap into the situational motivation to accomplish the task in the moment, and the existing parameters, to make ordinary tasks into learning opportunities. And that more ubiquitous, continuous development is more naturally matched to how we learn.
