Learnlets

Clark Quinn’s Learnings about Learning

From platitudes to pragmatics

4 July 2023 by Clark

It’s easy to talk principle. (And I do. ;) Yet there are pragmatics to deal with as well. For instance, ‘clients’ (internal or external) give us desired outcomes that are vague and unfocused. We generally don’t want to have to educate them about our business, yet we need more focused guidance, particularly when it comes to designing meaningful practice. How, then, do we get from platitudes to pragmatics?

To be clear, what’s driving this is trying to create practice that will lead to actual outcomes. That’s, first, because practice is the most tangible manifestation of the performance objectives. It’s also the biggest contributor to learning actually having an impact! We need good objectives to know what we’re targeting; the next thing we need to do is design the practice. After we design practice, we can develop the associated content, etc. How do we get this focus?

I see several ways. Ideally, we can engage clients in a productive conversation. We can use the advocated ‘yes, and…’ approach, turning the conversation to the outcomes they’re looking for, and ideally even to metrics. E.g. “how will we know when we’ve succeeded?” When we hear “our sales cycle takes too long” or “our closure rate isn’t good enough”, if the topic is sales, there are metrics there. If we hear “too many errors in manufacturing” or “customer service ratings aren’t high enough”, that’s quantifiable, and we have a target.

There are other situations, however. We might not get metrics, and then we have to infer them from the performance outcomes. When we hear “we need sales training” or “we need to review the manufacturing process” or “we need a refresher on customer service”, it’s a bit vaguer. We should try to dig in (“what part of sales isn’t up to scratch?” or “what are customers complaining about?”), but we may not always have the opportunity. Still, we can build practice around the specific associated tasks.

The biggest problem, really, is ‘awareness’ courses. “I just want folks to know this.” (Which raises the question: why?) I fear that part of the answer is a legacy belief that we’re formal, logical reasoning beings, and so new information will change our behavior. (NOT!) It can also be that the client just doesn’t know any better, nor has any greater insight than “if they know it, it is good”. However, I still think there’s something we can do here, even if it’s a case of ‘easier to get forgiveness than permission’.

I think we can infer what people would do with the information. If they insist folks need to be aware of harassment, or diversity, or…, we can ask ourselves “what would folks do differently?” One decision is whether to intervene, report, or ignore. Another might be where and how to do those things. In general, even if the requester isn’t aware of it, there’s something they actually expect people to do. We have to infer what that might be. The client can then critique it, but the result is more effective for the organization and more engaging for the learner. That, to me, is a reasonable justification!

Whether it’s mapped to multiple choice questions (see Patti Shank’s seminal book on the topic), scenarios (Christy Tucker is one of our gurus), or full games (I have my own book on that ;), we need to give learners practice in dealing with the situations that use the information. I think we can work from platitudes to pragmatics, and should. What do you think?

Two steps for L&D

6 June 2023 by Clark

In a conversation, we were discussing how L&D fares. Badly, of course, but we were looking at why. One of the problems is that L&D folks don’t have credibility. Another is that they don’t measure. I didn’t raise it in that conversation, but it’s come up before, and again in another conversation, that they’re also not being strategic. Overall, there are two steps L&D needs to take to really make an impact.

Now, I joke that L&D isn’t doing well what it’s supposed to be doing, and isn’t doing enough. My first complaint is that we’re not doing a good job. In the second conversation, up-skilling came up as an important trend. My take is that it’s all well and good to want to do it, but if you really want persistent new skill development, you have to do it right! That is, shooting for retention and transfer. Which, I’ve just found out, will be the topic of my presentation at DevLearn this year. It’s also the topic of the Missing LXD workshop (coming at Asia-Pacific-friendly times this July/Aug), which links that learning science grounding to engagement as well.

I’ve argued that the most important thing L&D can do is start measuring, because it will point out what works (and doesn’t). That’s a barrier that came up in the first conversation: how do we move people forward in their measurement? We were talking about small steps: if they’re doing learner surveys (c.f. Thalheimer), let’s encourage them to also survey some time after the learning. If they’re doing that, let’s also have them ask supervisors. Etc.

So, this is a necessary step. It’s not enough, of course. You might throw courses at things where they don’t make sense, e.g. where performance support would work better. Measurement should tell you that a course isn’t working, but it won’t necessarily point you directly to performance support. Still, measurement is a step along the way. There’s another step, however.

The second thing I argue we should do is start looking at going beyond courses. Not just performance support, but here I’m talking about informal and social learning, e.g. innovation. There are both principled and practical reasons for this. The principled reason is that innovation is learning; you don’t know the answer when you start. Thus, knowing how learning works provides a good basis for assisting here. The practical reason is it gives a way for L&D to contribute to the most important part of organizational success. Instead of being an appendage that can be cut when times are tough, L&D can be facilitating the survival and thrival strategies that will keep the organization agile.

Of course, we’re running a workshop on this as well. I’m not touting it just because it’s on offer; I’m behind it because it’s something I organized specifically, as it’s so important! We’ll cover the gamut, from individual learning skills, to team, and organizational success. We’ll also cover strategy. Importantly, we have some of the best people in the world to assist! I’ve managed to convince Harold Jarche, Emma Weber, Kat Koppett, and Mark Britz (each of whom alone would be worth the price of entry!), on top of myself and Matt Richter. Because it’s the Learning Development Accelerator, it will be evidence-based. It’ll also be interactive, and practically focused.

Look, there are lots of things you can do. There are some things you should do. There are two steps for L&D to take, and you have the opportunity to get on top of each. You can do it any way you want, of course, but please, please start making these moves!

A placebo effect?

30 May 2023 by Clark

I was thinking about what we too often see as elearning: the usual content dump and knowledge test. There’s good reason to believe that it isn’t effective. So, why are we seeing it continue? Is it a placebo effect?

I tend to view this as a superstition. That is, the belief that information presentation will lead to behavior change is held implicitly. I think it originates from a legacy perspective that we’re logical, and therefore new information will yield impact. (Not.) Regardless, it exists.

I was inclined to wonder if, really, it’s a placebo. That is, doing something with a hope that things change, but the onus is on the individual, not the intervention. There’s not going to be any actual effect, but it makes people feel better. Of course, the role is different here; the placebo makes the doctor feel better! (Or the health system? I’m muddling my metaphor…:)

It may not be that in practice, of course. There is a ‘faith’ that “if we build it, it is good”. So, biz units can ask for a course, and get one. They’ve provided content and access to SMEs. However, they push back when asked “what’s the actual problem?”, let alone when asked for measures. It’s like they think the job can be done with information. They’re happy with the appearance of a solution, because it’s easy, and no one’s checking.

We, of course, have to change this perception. If we continue to let folks believe they can give us content and we’ll deliver meaningful change, shame on us. Of course they don’t care about the measures, and they want things to be easy. We, however, have to care. It may as well be a placebo effect, because the ultimate impact is likely as null as a sugar pill’s, unless the patient wants to change. It’s probably not a great metaphor, but somehow it still seems apt. Thoughts?

Grounded in practice

16 May 2023 by Clark

Many years ago, I was accused of not knowing the realities of learning design. It’s true that I’ve been in many ways a theorist, following what research tells us, and having been an academic. I also have designed solutions, designed design processes, and advised orgs. Still, it’s nice to be grounded in practice, and I’ve had the opportunity of late.

So, as you read this, I’m in India (hopefully ;), working with Upside Learning. I joined them around 6 months ago to serve as their Chief Learning Strategist (on top of my work as Quinnovation, as co-director of the Learning Development Accelerator, and as advisor to Elevator9). They have a willingness to pay serious attention to learning science, which, as you might imagine, I found attractive!

It’s been a lot of marketing: writing position papers and such. The good news is it’s also been about practice. For one, I’ve been running workshops for their team (such as the Missing LXD workshop with the LDA coming up in Asia-friendly times this summer). We’ve also created some demos (coming soon to a sales preso near you ;). I’ve also learned a bit about their clients and usual expectations.

It’s the latter that’s inspiring. How do we bake learning science into a practical process that clients can comprehend? We’re working on it. So far, it seems like a mix of awareness, policy, and tools. That is, the design team must understand the principles in practice, there need to be policy adjustments to support the necessary steps, and the tools should support the practice. I’m hoping we have a chance to put some serious work into these during my visit.

Still, it’s already been eye-opening to see the realities organizations face in their L&D roles. It only inspires me more to fight for the changes in L&D that can address this. We have lots to offer orgs, but only if we move out of our comfort zone and start making changes. Here’s to the revolution L&D needs to have!


Curriculimb

9 May 2023 by Clark

Ok, so I’m going to go out on a limb here, and talk a wee bit about what I’ve been learning about designing curricula. I care about doing it right (and probably haven’t always). It’s not the average course that’s the issue, but big ones, or multiple courses addressing skill gaps. It’s been challenging to find a systematic approach, which is why I’m teetering on a curriculimb.

So, the issue is how to develop a curriculum. I know in higher ed (I was there once) it tends to be a process of figuring out what content is needed, and distributing it across courses. It’s probably more art than science: you move stuff around until it feels like you’ve got the right amount of content for each subject and it covers the ‘right stuff’. How people meet the criteria can vary. At a more research-focused institution, I could design my HCI course my way. At more teaching-focused institutions, people may actually be given course syllabi to teach to!

My problem is when I have an uncertain amount of content, say for a large domain, and I want to develop specific capabilities. In principle, we should work backwards from the final performance, which might include some very rich types of capabilities, so we might have a lot of concepts and practice involved. We’d need to create a large map. We might even break it up into conceptual stages (e.g. with programming: learning conditionals and then loops), and address them separately.
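One way to make that ‘large map’ concrete is to treat the curriculum as a prerequisite graph: each capability lists what must come before it, and any topological ordering of the graph is a candidate teaching sequence. Here’s a minimal sketch in Python; the topics and dependencies are hypothetical, purely for illustration, not an established method:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical prerequisite map for an intro programming curriculum:
# each capability maps to the capabilities it depends on.
prerequisites = {
    "variables": set(),
    "conditionals": {"variables"},
    "loops": {"conditionals"},
    "functions": {"variables"},
    "collections": {"loops", "functions"},
    "debugging": {"conditionals", "functions"},
}

# static_order() yields one valid teaching sequence; a designer would
# still group it into conceptual stages and iterate on the grouping.
order = list(TopologicalSorter(prerequisites).static_order())
print(order)
# e.g. ['variables', 'conditionals', 'functions', 'loops', 'debugging', 'collections']
```

The value isn’t the sort itself; it’s that making the dependencies explicit forces the conversation about what the final performance actually requires.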

You probably also need to provide some practice to deal with misconceptions. That is, where are folks likely to get off track and maybe discouraged? Then you want to create practice for that: the things you’d rather they learned before it matters.

When I looked for good principles around this, most of what I found basically said it’s iterative and there are no overarching principles (except work backwards and iterate ;). Which was less than satisfying; some evidence-based practice would be nice.

Now, one of the things I was pondering in the dark of the night was how AI could help. I’ve been hearing how it can parse content and create maps. However, I also realized that to do so, it needs well-structured content. Kind of a circular argument! I think we need people to define the structure; then AI can align content to it.
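As a toy illustration of that division of labor: below, the objectives are the human-defined structure, and a crude word-overlap score stands in for whatever similarity measure (embeddings, say) a real system would use to align content chunks to them. Both the objectives and the chunks are invented for illustration:

```python
import re

def words(text: str) -> set[str]:
    """Crude bag-of-words; a real system would use embeddings instead."""
    return set(re.findall(r"[a-z]+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between two texts' word sets."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# The human-defined structure comes first: target objectives.
objectives = [
    "branch program flow with conditionals",
    "repeat work with loops",
]

# Unstructured content chunks to be aligned against that structure.
chunks = [
    "An if statement lets a program branch on a condition using conditionals.",
    "A while loop lets you repeat a step until a test fails; loops in action.",
]

# Align each chunk to its best-matching objective.
for chunk in chunks:
    best = max(objectives, key=lambda o: similarity(chunk, o))
    print(f"{chunk[:40]}... -> {best}")
```

The point is the ordering of the work: the alignment step is easy once humans have defined what the content should align to.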

Again, right now it seems more like an art than a science. And I get that; it’s a lot like designing for engagement: create a first best guess and then test. Still, there are some solid results in engagement that give us grounds for the first pass. There’s less of that at the next level up. So, I’m out on a curriculimb, and welcome help getting down!

Tradeoffs in aesthetics

28 March 2023 by Clark

For the LDA debate this month, Ruth Clark talked to Matt Richter and me about aesthetics in learning. Ruth, you should know, is the co-author of eLearning and the Science of Instruction, amongst other books; it’s a must-have that leverages Rich Mayer’s work on multimedia learning. Thus, she’s knowledgeable about what the research says. What emerged in the conversation was a problem about tradeoffs in aesthetics that’s worth exploring.

So, for one thing, we know that gratuitous media interferes with learning. From John Sweller’s work on cognitive load theory, we know that processing the unnecessary data reduces cognitive resources available to support learning. There’s usually enough load just with the learning materials. Unless the material materially supports learning, it should be avoided.

On the other hand, we also know that we should contextualize learning. The late John Bransford’s work with the Cognition and Technology Group at Vanderbilt, for instance, demonstrated this. As the late David Jonassen also demonstrated with his problem-based learning, we retain and transfer better with concrete problems. Thus, creating a concrete setting for applying the knowledge benefits learning.

What this sets up, of course, is a tradeoff. That is, we want to use aesthetics to help communicate the context, but we want to keep them minimal. How do we do this? Even text (which is a medium) can be extraneous. There really is only one true response: we create our first best guess, and then we test. The testing doesn’t have to be at the level of scientific rigor, mind you. Even if it just passes the scrutiny of fellow team members, it can be the right choice, though ideally we run it by learners.

What we have to fight is those who want to tart it up. There will be folks who want more aesthetics. We have to push back against that, particularly if we think it interferes with learning. We need to ensure that what we’re producing doesn’t violate what’s known. It’s not always easy, and we may not always win, but we have to be willing to give it a go.

There are tradeoffs in aesthetics, so we have to know what matters. Ultimately, it’s about the learning outcomes. Thus, focusing on the minimum contextualization, and the maximum learning, is likely to get us to a good first draft. Then, let’s see if we can’t check. Right?

Time is the biggest problem?

21 March 2023 by Clark

In conversations, I’ve begun to suspect that one of the biggest, if not the biggest, problems facing designers wishing to do truly good, deep design is client expectations. That is, a belief that if we’re provided with the appropriate information, we can crank out a solution. Why, don’t you just distribute the information across the screen and add a quiz? While there are myriad problems, such as lack of knowledge of how learning works, etc., folks seem to think you can turn around a course in two weeks. Thus, I’m led to ponder whether time is the biggest problem.

In the early days of educational technology, it was considered technically difficult. Thus, teams worked on instantiations: instructional designers, media experts, technologists. Moreover, they tested, refined, and retested. Over time, the tools got better. You still had teams, but things could go faster; you could create a draft solution pretty quickly with rapid tools. However, when people saw the solutions, they were satisfied. It looks like content and quizzes, which is what school is, and that’s learning, right? Without understanding the nuances, it’s hard to tell well-produced learning from well-designed and well-produced learning. Iteration and testing fell away.

Now, folks believe that with a rapid tool and content, you can churn out learning by turning the handle. Put content into the hopper, and out come courses. This was desirable from a cost-efficiency standpoint. It gets worse when we fail to measure impact. If we’re just asking people whether they like it, we don’t really know if it’s working. There’s no basis to iterate! (BTW, the correlation between learner assessments of learning quality and the actual quality is essentially zero.)

For the record, an information dump and knowledge test is highly unlikely to lead to any significant change in behavior (which is what we’re trying to accomplish with learning). What we need is meaningful practice, and getting that right requires a first draft and fine tuning. We know this, and yet we struggle to find the time and resources to do it, because of expectations.

These expectations of speed, and unrealistic beliefs in quality, create a barrier to actually achieving meaningful outcomes. If folks aren’t willing to pay for the time and effort to do it right, and they’re not looking at outcomes, they will continue to believe that what they’re spending isn’t a waste.

I’ve argued before that what might make the biggest impact is measurement. That is, we should be looking to address some measurable problem in the org and not stop until we have addressed it. With that, it becomes easier to show that the quick solutions aren’t having the needed impact. We need evidence to support making the change, but I reckon we also need to raise awareness. If we want to change perception, and the situation, we need to ensure that others know time is the biggest problem. Do you agree?

Misconceptions?

28 February 2023 by Clark

Several books ago, I was asked to talk about myths in our industry. I ended up addressing myths, superstitions, and misconceptions. While the myths persist, the misconceptions propagate, aided by marketing hype. They may not be as damaging, but they are still a money-sink, and they contribute to our industry’s lack of progress. How do we address them?

The distinctions I make between the three categories are, I think, pretty clear. Myths are beliefs that folks will willingly proclaim, but that are contrary to research. This includes learning styles, the attention span of a goldfish, millennials/generations, and more (references in this PDF, if you care). Superstitions are beliefs that don’t get explicit support, but manifest in the work we do. For example, that new information will lead to behavior change. We may not even be aware of the problems with these! The last category is misconceptions. They’re nuanced: there are times when they make sense, and times they don’t.

The problem with the latter category is that folks will eagerly adopt, or avoid, these topics without understanding the nuances. They may miss opportunities to leverage the benefits, or perhaps more worrying, they’ll spend on an incompletely-understood premise. In the book, I covered 16 of them:

70:20:10
Microlearning
Problem-Based Learning
7 – 38 – 55
Kirkpatrick
NeuroX/BrainX
Social Learning
UnLearning
Brainstorming
Gamification
Meta-Learning
Humor in Learning
mLearning
The Experience API
Bloom’s Taxonomy
Learning Management Systems

On reflection, I might move ‘unlearning’ to myths, but I’d certainly add to this list. Concepts like immersive learning, workflow learning, and Learning Experience Platforms (LXPs) are some that are touted without clarity. As a consequence, people can be spending money without necessarily achieving any real outputs. To be clear, there is real value in these concepts, just not in all conceptions thereof. The labels themselves can be misleading!

In several of my roles, I’m working to address these, but the open question is “how?” How can we illuminate the necessary understanding in ways that penetrate the hype? I truly do not know. I’ve written here, and spoken and written elsewhere, about such concepts, to little impact (microlearning continues to be touted without clarity, for instance). At this point, I’m open to suggestions. Perhaps, as with myths, it’s just persistent messaging and ongoing education. However, not being known for my patience (a flaw in my character ;), I’d welcome any other ideas!

Thinking artificially

21 February 2023 by Clark

I finally put my mitts on ChatGPT. The recent revelations, concern, and general plethora of blather about it made me think I should at least take it for a spin around the block. Not surprisingly, it disappointed. Still, it got me thinking about thinking artificially. It also led me to a personal commitment.

What we’re seeing is a two-fold architecture. On one side is a communication engine, e.g. ChatGPT. It’s been trained to frame, and reframe, text communication. On the other side, however, must be a knowledge engine, i.e. something to talk about. The current instantiation used the internet. That’s the current problem!

So, when I asked about myself, the AI accurately posited two of my books. It also posited one that, as far as I know, doesn’t exist! Such results are not unknown. For instance, owing to the prevalence of the learning styles myth (despite the research), the AI can write about L&D and mention styles as a necessary consideration. Tsk!
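To picture that two-fold split, here’s a toy sketch. The `phrase` function is a hypothetical stand-in for the communication engine (a real one would be an LLM), and both fact lists are made up for illustration. The point is that fluent framing says nothing about the truth of the knowledge side, which is exactly how a non-existent book gets confidently reported:

```python
def phrase(facts: list[str]) -> str:
    """Hypothetical communication engine: fluently frames whatever it's given."""
    return "The author's books include " + ", ".join(facts) + "."

curated = ["Book A", "Book B"]                    # a vetted knowledge store
scraped = ["Book A", "A Book That Doesn't Exist"]  # 'the internet'

# Same fluent framing either way; fluency tells you nothing about truth.
print(phrase(curated))
print(phrase(scraped))
```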

The problem’s compounded by the fact that many potential knowledge bases, beyond the internet, have legacy problems. Bias has been a problem in human interactions, and records of those interactions can therefore carry bias too. As I (with co-author Markus Bernhardt) have opined, there is a role for AI in L&D, but a primary one is ensuring that there’s good content for an AI engine to operate on. Another, I argue, is to create the meaningful practice that AI currently can’t, and likely won’t be able to for the foreseeable future. I also have yet to see an AI that can create a diagram (tho’ that, to me, isn’t as far-fetched, depending on the input).

I have heard from colleagues who find the existing ChatGPT very valuable. However, they don’t take what it says as gospel; instead, they use it as a thinking partner. That is, they’ll prompt it with thoughts they’re having to see what comes up. The goal is to get some lateral input to consider: ideas they may have missed or not seen. That’s a valuable role.

At this point, I may or may not use AI in this way, as a thinking (artificially) partner. I’ll have to experiment. One thing I can confidently assert is that everything you read (e.g. here) that is truly from me (i.e., granting the possibility I could be faked) will be truly from me. I’m immodest enough to think that my writing is not in need of artificial enhancement. I may be wrong, but that’s OK with me. I hope it is with you, too!

It’s complex

7 February 2023 by Clark

In a recent conversation, I was talking about good design. Someone asked a question, and I elaborated that there was more to consider. Pressed again, I expanded yet more. I realized that when talking good learning design, it’s complex. However, knowing how it’s complex is a first step. Also, there are good guidelines. Still, we will have to test.

I’m not alone in suggesting that, arguably, the most complex thing in the known universe is the human brain. I jokingly ask whether bullet points are really going to lead to sustained changes in behavior in such a complex organism. Yet I also tout learning science design principles that help us. Is there a resolution?

The complexity comes from a number of different issues. For one, the type, quantity, challenge, and timing of practice depend on multiple factors. Things that can play a role include how complex the task is, how frequently it’s performed, and how important the consequences are. Similarly, the nature of the topic, whether it’s evolutionarily primary or secondary, can also have an influence. The audience, of course, makes a difference, as does the context of practice. Addressing the ‘conative’ elements (motivation, anxiety, confidence) also requires some consideration. That’s a lot of factors!

Yet, we know what makes good practice, and we can make initial estimates of how much we need. Likewise, we can choose a suite of contexts to be covered to support appropriate transfer. We have processes as well as principles to assist us in making an initial design.

Importantly, we should not assume that the first design is sufficient. We do, unfortunately, and wrongly. Owing to the complexity of items identified previously, even with great principles and practices, we should expect that we’ll need to tune the experience. We need to prototype, test, and refine. We also need to build that testing into our timelines and budgets.

There is good guidance about testing, as well. We know we should focus on practice first, using the lowest technology possible. We should test early and often. Just as we have design guidance, these are practices that we know assist in iterating to a sufficient solution. Similarly, we know enough that it shouldn’t take much tuning since we should be starting from a good basis.

Using the cognitive and learning sciences, we have good bases to start from on the way to successful performance interventions. We have practices that address our limitations as designers, and the necessities for tuning. We do have to put these in practice in our planning, resourcing, and executing. Yet we can create successful initiatives reliably and repeatedly if we follow what’s known, including tuning. It’s complex, but it’s doable. That’s the knowledge we need to acknowledge, and ensure we possess and apply.
