Learnlets

Clark Quinn’s Learnings about Learning

Making transformation manifest

7 January 2020 by Clark

I’ve been on a ‘transformation’ kick. And it occurs to me that it may be more marketing than meaning. One aspect is that we need to make transformation manifest to our learners.

The transformation I’m talking about is our learning experiences. That is, while we need to transform our practices and ourselves, here I’m talking about the outcomes of our design. I want the experience to be more than just mundane, I want it to be transformative.

And, I realize, learning  inherently is transformative. You’re coming away with new understandings, skills, and/or perspectives. If it’s effective, you’ve changed. That’s what learning is about! It does need to be effective, but if it is, you’re newly capable.

So it’s not about helping folks transform. That’s happening. Instead, it’s about helping people  realize that they’re transformed. Which is more about marketing, really.

What’s involved in making transformation manifest? To make it work it takes steps at the beginning and the end of the experience. At the beginning, it’s helping learners realize two things:

  • That they do want to learn this
  • That they don’t know it now

There are a variety of ways to do this. We can make it humorously or dramatically clear why this is important. And we can let them actually try (and fall short), or otherwise convince them that their knowledge isn’t yet up to the necessary level.

Ultimately, we need to make it clear that they now have a new ability. They may also need to have the confidence to try when the situation arises. Again, I’ll suggest two steps:

  • They need to know that they now have a new capability
  • Introduce them, directly or indirectly, to the community of folks who also have this ability

To demonstrate their new ability, they need to actually perform and succeed. And we should acknowledge, even celebrate, their new capability. We should be explicit, and even consider a ritual that signifies their accomplishment.  Badges could make sense here.

The point is that learners  are changing. If we’ve done our job right, they’re looking at the world in a somewhat different way. We should help them recognize that, for their sake and ours. They should know their new capability, and they should acknowledge, even respect, our contribution. I’m suggesting we should explicitly be making transformation manifest. What do you think?

Upcoming TK2020’s new approach

24 December 2019 by Clark

Amongst the conferences I go to (frequently the eLearning Guild events, others as invited or when doing something) is ATD’s TechKnowledge. And I’ll be there again this coming year (get 10% off with this code: 30TK2020). And while I think both of my offerings are of interest, one is more problematic. So I’m asking your help in dealing with the upcoming TK2020’s new approach.

It’s in San Jose, which is always nice since it means I don’t have to get on a plane. (I don’t object to flying, but I’d prefer to train or drive.) It’s at the beginning of February (5-7), which can be a quiet time. Also, downtown San Jose has some really nice dining options (e.g. the mega food court at San Pedro Square Market). And the weather’s unlikely to be icy or snowy. Maybe some rain, but tolerable temperatures. So it’s convenient all around.

One session I’m doing is a traditional one-hour presentation. This is one I trialed with my local chapter; I enjoyed it and it seemed they did too. It’s about how learning science suggests changes to curriculum and pedagogy. (Officially it’s “Transforming Learning: A Learning Science-Based Curriculum and Pedagogy.”) It’s very LXD, and I think there are some interesting and challenging observations in it. In particular, I’ll be bringing in the Free Energy principle and its implications for why learning can and should be transformative. And more.

The other session is something new in format. They’re being adventurous, and kudos to them. They’re creating a suite of stages, each with a different theme (in their words):

  • In the Build area, you’ll engage in hands-on learning and experimentation with the latest learning technologies.
  • The Disrupt area will feature ten hyper-focused facilitator-led conversations about industry issues.
  • At the Spark area you’ll find your next big idea through mini-sessions and discussions on emerging trends.
  • At the Connect area, you’ll participate in structured topic- or industry-focused networking with your peers.
  • The Advance area will allow you to hone your skills in specific areas by participating in accelerated mini-sessions and discussions.
  • At the Explore area you’ll examine case studies of named organizations for new ideas and inspiration. Play sparks creativity, and what you do here will ignite your potential.

My session is in the Disrupt area, and not surprisingly the topic is myths. Well, the official title is “Professionalism in Practice: Resisting Hype, Myths, Superstitions, and Misconceptions.” The issue is what to do!

I have 30 minutes. And I can see several things to do. The question is, which one is most appealing/interesting, and effective? So I’m hoping you’ll help determine what I should be doing for the  upcoming TK2020’s new approach.

Some options:

Make it just a Question and Answer session. I could open it up to whatever people would like to hear about myths and how to be prepared to withstand them.

Another option would be to do it as a game-show-style event; I did this with Jay Cross one time. I’d pick nine topics, put them up on the screen in a 3 × 3 grid, and address them in the order people choose.

In the spirit of the description, I’m not going to just give a presentation, but what does “hyper-focused” mean? Maybe wrap a format around several top myths? (How many can I do in 30 minutes?) Asking attendees “what makes this appealing?” Then a brief explanation of why it’s wrong. Then “what might you do instead?” And, finally, “how can you prevent this?”

Or, focusing on the ‘resist’, I could crowd-source ideas around a general model of resistance. Asking, in some order: “Where do myths come from?” “Who can you trust?” “What’s good evidence?” “How would you do it yourself?” “What’s a practical process we can use?”

Or something else?

Obviously, I’m not short of ideas, but converging is challenging; I can see pluses and minuses for each. So, I thought I’d ask you all what you think about how I should adapt to the upcoming TK2020’s new approach. Feedback not just welcome, but eagerly solicited!

So, c’mon, give me a gift here! (Obligatory seasonal imprecation; or, of course, an interesting project for your organization. ;) And happy holidays to you and yours, and all the best for the coming year.

Content systems not content packages

17 December 2019 by Clark

In a conversation last week (ok, an engagement), the topic of content systems came up. Now this is something I’ve argued for before, in several ways. For one, separate content from how it’s delivered. And pull content together by rules, not hardwiring. It’s also about the right level of granularity. It’s time to revisit the message: when I first made the case I thought it was too early, but the time is fast coming when we can look at this.

This is in opposition to the notion of pre-packaged content. MOOCs showed that folks want to drill in to what they need. Yet we still pull everything together and launch it as a final total solution. We are moving to smaller chunks (all for the better; even if it is burdened with a misleading label). But there’s more.

The first point is about content models: we should start designing our content as smaller chunks. My heuristic is the smallest thing you’d give to one person but not another. My more general principle is to break content down by its learning role: a concept model is different from an example, which is different from a practice.

This approach emerged from an initiative on an adaptive learning system I led. It has since played out as a mechanism supporting several initiatives in delivering content appropriately. For one, it supported different business products from the same content repository. For another, it was about delivering the right thing at the right time.

Which leads to the second point, about being able to pick and deliver the right thing  for the context.  This includes adaptive systems for learning, but also context-based performance support. With a model of the learner, the context, and the content, you can write rules that put these together to optimally identify the right thing to push.

You can go further. Think of two different representatives from the same company visiting a client. A sales person and a field engineer are going to want different things in the same location. So you can add a model of ‘role’ (though that can also be tied to the learner model).
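Putting those pieces together, a rules-based selector over learner, context, and role models might look like the following sketch. To be clear, the repository contents, field names, and `select` function here are all hypothetical illustrations, not any actual system’s API:

```python
# Hypothetical sketch: content chunks tagged by learning role, with a
# rule that matches a topic, audience (role model), and needed learning
# role to pick the right chunk from a shared repository.
from dataclasses import dataclass

@dataclass
class Chunk:
    topic: str
    role: str      # learning role: 'concept', 'example', 'practice'
    audience: str  # who it's for: 'sales', 'engineer', ...

REPOSITORY = [
    Chunk("pricing", "concept", "sales"),
    Chunk("pricing", "example", "sales"),
    Chunk("installation", "practice", "engineer"),
]

def select(topic, audience, needed_role):
    """One repository, different delivery: the rule combines models of
    the content, the context, and who's asking."""
    for chunk in REPOSITORY:
        if (chunk.topic == topic and chunk.audience == audience
                and chunk.role == needed_role):
            return chunk
    return None

# The sales rep and the field engineer at the same client site
# get different things from the same repository:
sales_chunk = select("pricing", "sales", "example")
engineer_chunk = select("installation", "engineer", "practice")
```

The design point is that the pairing of content to situation lives in the rules, not hardwired into a package, so adding a new audience or context means adding tags and rules rather than rebuilding the content.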

There’s more, of course. To do this well requires content strategy, engineering, and management. Someone put it this way: strategy is what you want to deliver, engineering is how, and management is overseeing the content lifecycle.

Ultimately, it’s about moving from hardwired content to flexible delivery. And that’s possible and desirable. Moreover, it’s the future. As we see the movement from LMS to LXP, we realize that it’s about delivering just what’s needed when useful. Recognizing that LXPs are portals, not about creating experiences, we see the need for federated search.

There’s more: semantics means we can identify what things are (and are not), so we can respond to queries. With chatbot interfaces, we can make it easier to automate the search and offering to deliver the right thing to the right person at the right time.

The future is here; we see it in web interfaces all over the place. Why aren’t we seeing it yet in learning? There are strong cognitive reasons (performance support, workflow learning, self-directed and self-regulated learning).  And the technology is no longer the limitation. So let’s get on it. It’s time to think content systems, not content packages.

 

Unpacking some nuances

3 December 2019 by Clark

In my book  Engaging Learning, I had a suite of elements  for both effective education practice and engaging experiences. Of course, the point was that they perfectly aligned. However, I’m unpacking a couple of them, and I thought it’d be helpful to ensure that I am clear about them, and so that you are too. So I’m unpacking some nuances in two different groups of elements.

For one, I talk about contextualized, anchored, and relevant.  These three are related, but each plays a different role, and it’s important to be concerned about each separately.

Contextualized isn’t difficult. Research (e.g. Jonassen’s work) has shown that we perform better working from problems that are concrete rather than abstract. (Which is why those abstract problems kids are assigned in schools are so wrong!) We work better with concrete problems, with facilitation to support abstraction and transfer. Otherwise we get ‘inert knowledge’: knowledge we can pass a test on, but will never even activate in a relevant problem situation.

Anchored, here, means that it’s a real use of the knowledge. Instead of using problems about fractions of a crayon, for instance, it might be about serving food (pies, pizza). Similarly, engineering equations about curves could be applied to a roller coaster instead of an abstract pattern. The activity using the knowledge should be the way it’d be used in the world.

That’s related to, but different from, being relevant. Not relevant to the learning, but to the learner.  That is, the problem they’re solving is one that the learner cares about. So, for maritime enthusiasts, we might use geometry to figure out sail angles to the wind. While for gamers, we might use it to calculate graphics.

The second dichotomy is about active versus exploratory. They’re related, but each has an independent component.

For exploratory, I’m talking about learners having choices. That is, there are alternate courses of action; they can choose one or another. The alternatives to the right answer, by the way, aren’t obvious or silly, but instead represent reliable ways learners get it wrong.

For active, I mean they must commit. It’s not enough to roll over the options and see the feedback, they have to choose one, and then see if it was right or not. And give consequences of their choices before feedback!
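These two requirements can be made concrete in a data structure for a practice item. This is a hypothetical sketch (not from the book): distractors carry the misconception they model, the learner must commit to one option, and the consequence plays out before the feedback:

```python
# Hypothetical sketch of an 'active + exploratory' practice item.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Option:
    text: str
    consequence: str  # what happens in the world after the choice
    feedback: str     # the explanation, shown only after the consequence
    misconception: Optional[str] = None  # why learners plausibly pick it (None = correct)

def respond(options, choice_index):
    """Learner commits to one option; consequence comes before feedback."""
    chosen = options[choice_index]
    correct = chosen.misconception is None
    return chosen.consequence, chosen.feedback, correct

# Illustrative scenario: the wrong option models a reliable misconception,
# not a silly throwaway.
options = [
    Option("Evacuate and call it in", "Everyone exits safely",
           "Right: people first, then the fire."),
    Option("Fight the fire yourself", "The fire spreads past the doorway",
           "A common instinct, but small fires grow fast.",
           misconception="overconfidence with 'small' fires"),
]
consequence, feedback, correct = respond(options, 1)
```

The separation of `consequence` from `feedback` is the point: the learner sees the world react to their committed choice before any instructional explanation appears.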

Unpacking some nuances helps, I hope, to ensure you address each separately, and consequently appropriately.  Here’s to nuanced design!

Templates for good

29 October 2019 by Clark

In terms of the various ways in which we can support the gaps in our cognition, one of the terms is ‘templates’. And it’s worth discussing what a template is, and considering them at a variety of levels. I want to suggest we should have templates for good.

What is a template? Merriam-Webster defines it as “a gauge, pattern, or mold used as a guide to the form of a piece being made”. In terms of software and business, templates are forms with some of the elements already completed. Instead of starting from scratch, pieces are already done, and there are slots for various information.

Why use templates?  With them, it’s easier to do design. They make it easy to accomplish particular goals. They can make it easy to build particular types of outputs, and make them more systematic and consistent. For better or worse.

How does that change for learning? Here, a template tends to be a framework for particular interactions. For example, there are the tarted-up quiz show formats. With more depth, we can provide guides for learning, suggesting quality elements. We might have a place in our examples for the underlying thinking. Or we could  structure practice as decision making.

But we can have templates at higher levels, too. For instance, we can ask that the objective include elements of measurable evaluation, and carry that forward through the final practice design. We can go beyond that, and have structures to guide good curriculum design.
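A design template at this level can be as simple as a checklist with required slots. Here is a minimal sketch in Python; the slot names are invented for illustration, loosely following the elements mentioned above:

```python
# A design template as a checklist: required slots a design must fill
# before it counts as complete. Slot names are illustrative only.
REQUIRED_SLOTS = [
    "measurable_objective",  # objective includes measurable evaluation
    "conceptual_model",      # the underlying thinking behind examples
    "example",
    "decision_practice",     # practice structured as decision making
]

def missing_slots(design: dict) -> list:
    """Return the required elements the design hasn't filled in yet."""
    return [slot for slot in REQUIRED_SLOTS if not design.get(slot)]

draft = {
    "measurable_objective": "Identify fire hazards in a workplace photo",
    "example": "Annotated walkthrough of a hazard inspection",
}
gaps = missing_slots(draft)  # the template flags what's still missing
```

The value is less in the code than in the discipline: the template nudges the designer toward substance (a measurable objective carried through to practice) rather than just visual polish.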

If I have to choose, of course, I’ll go for substance over style. I’d rather your templates suggest good design than flashy but insubstantial experience. It’s time to be doing evidence-based learning instead of gaudy but rote experiences. If we’re going to have templates, let’s have templates for good.

Play to Learn

17 October 2019 by Clark

Thinking more about Friston’s Free Energy Principle and the implications for learning design prompted me to think about play. What drives us to learn, and then how do we learn? And play is the first answer, but does it extend? Can we play to learn beyond the natural?

The premise behind the Free Energy principle is that organisms (at every level) learn to minimize the distance between their predictions and what actually occurs. And that’s useful, because we use our predictions to make decisions, and it’s useful if our decisions get better and better over time. To do that, we build models of the world.
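As an illustration only, and far simpler than Friston’s actual formulation, the core loop of prediction-error minimization can be sketched as:

```python
# Illustrative sketch: an agent updates its prediction to shrink the
# gap ("surprise") between what it predicted and what occurred.
def update(prediction, outcome, learning_rate=0.1):
    """Nudge the prediction toward the observed outcome."""
    error = outcome - prediction   # prediction error, the "surprise"
    return prediction + learning_rate * error

prediction = 0.0
for outcome in [1.0, 1.0, 1.0, 1.0]:   # the world keeps producing 1.0
    prediction = update(prediction, outcome)
# each pass moves the prediction toward 1.0, so future surprise shrinks
```

Decisions based on the prediction improve as the model of the world improves, which is the sense in which minimizing prediction error drives learning.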

Now, it’d be possible for us to just sit in a warm dark room, so our predictions are right, but we have drives and needs. Food, shelter, sex, are drives that can at least occasionally require effort. The postulate is that we’ll be driven to learn when the consequences of not learning are higher than the effort of learning.

At this level, animals play to learn things like how to hunt and interact. Parents can help as well.  At a higher level than survival, however, can play still work? Can we learn things like finance, mathematics, and other human-made conceptions this way? It’d be nice to make a safe place to ‘play’, to experiment.

Raph Koster, in his A Theory of Fun, tells us that computer games are fun precisely because they require learning. You need to explore, and learn new tricks, to beat the next level. And computer games can be about surviving in made-up worlds.

The point I’m getting to is that the best learning should be play; low stakes exploration, tapping into the elements of engagement to make the experience compelling. You want a story about what your goal is, and a setting that makes that goal reasonable, and more.

To put it another way, learning  should be play. Not trivial, but ‘hard fun’.  If we’re not making it safe, and providing guided discovery to internalize the relationships they need, to build the models that will make better decisions, we’re not making learning as naturally aligned as it can be. So please, let your people play to learn. Design learning experiences, not just ‘instruction’.

 

Tools for LXD?

24 September 2019 by Clark

I’ve been thinking on LXD for a while now, not least because I’ve an upcoming workshop at DevLearn in Lost Wages in October. And one of the things I’ve been thinking about are the tools we use for LXD. I’ve created diagrams (such as the Education Engagement Alignment) and quips, but here I’m thinking of something else. We know that job aids are helpful: things like checklists, decision trees, and lookup tables. And I’ve created some aids for the Udemy course on deeper elearning I developed. But here I want to know what you are using as tools for LXD. How do you use external resources to keep your design on track?

The simple rationale, of course, is that there are things our brains are good at, and things they’re not. We are pattern-matchers and meaning-makers, naturally making up explanations for things that happen. We’re also creative, finding solutions under constraints. Our cognitive architecture is designed to do this; to help us adapt to the first-level world we evolved in.

However, our brains aren’t particularly good at the second-level world we have created. Complex ideas require external representation. We’re bad at remembering rote and arbitrary steps and details. We’re also bad at complex calculations.  This makes the case for tools that help scaffold these gaps in our cognition.

And, in particular, for design. Design tends to involve complex responses, in this case in terms of an experience design. That maps out over content, time, and tools. Consequently, there are opportunities to go awry. Therefore, tools are a plausible adjunct.

You might be using templates for good design. Here, you’d have a draft storyboard, for instance, that ensures you’re including a meaningful introduction, causal conceptual model, examples, etc. Or you might have a checklist that details the elements you should be including. You could have a model course that you use as a reference.

My question, to you, is what tools are you using to increase the likelihood of a quality design, and how are they working for you?  I’d like to know what you’ve found helpful as tools for LXD, as I look to create the best support I can. Please share!

Clear about the concept

19 September 2019 by Clark

I went to hear a talk the other day. It was about competency-based education (CBE) for organizations. Ostensibly. And, while I’m now affiliated with IBSTPI, it’s not like I’m a competency expert. And maybe I expect too much, but I really hope for people to be clear about the concept. Alas, that’s not what I found.

So, it started out reasonably well, talking about how competencies are valuable. There were a number of points, and many made sense, although some were redundant. Maybe I missed some nuance? I try to be open-minded. It’s about creating clear definitions of performance, and aligning those with assessments. Thus, you’re working on very clear descriptions of what people should be doing.

It got  interesting when the speaker decided to link CBE to Universal Design for Learning (UDL).  And it’s a good program.  UDL talks about using multiple representations to increase the likelihood for different learners to be able to comprehend and respond. This, in the talk, was mapped to three different segments: engaging the learners in multiple ways, communicating concepts in multiple ways, and allowing assessment in multiple ways. And this is good. For learning. Does it make sense for CBE?

To start, the argument was that you should present the rationale for the learning in multiple ways. While in general CBE inherently embodies meaningfulness in the nature of clear and needed skills, I don’t have a problem with this. I argue you should hook learners in emotionally and cognitively, and those can be separate activities. There was a brief mention of something like ‘learning styles’, but while now wary, I was ready to let it go.

However, the talk went on to make a case for multiple representations of content. And here the slide  explicitly  said ‘learning styles’ and used VARK. And don’t get me wrong, multiple representations and media are good,  but not for learning styles! The current status is that there’s essentially no valid instrument to measure learning styles, and no evidence that even if you did, that it makes a difference. None. So, of course, I raised the issue. And we agreed that maybe not for learning styles, but multiple representations weren’t bad.

The final point was that there could be multiple forms of assessment. At this point, I wasn’t going to interrupt again, but at the end of the session raised the point that the critical element of CBE is aligning the assessment with the performance! You can’t have them do an interpretative dance about identifying fire hazards, for instance, you have to have them identify fire hazards! So, here the audience ultimately agreed that variability was acceptable  as long as it measured the actual performance. Again, I don’t think the speaker was clear about the concept.

There were two major flaws in this talk. One was casually mashing up a couple of essentially incommensurate ideas. CBE and UDL aren’t natural partners. There can be overlapping concepts, but… The second, of course, is using a popular but fundamentally flawed myth about learning. If you’re going to claim authority, don’t depend on broken concepts.

To put it another way, I think it’s fair to expect speakers to be clear about the concept. (Either that, or maybe the lesson is that Clark shouldn’t be allowed to listen to normal speakers. ;)  Please, please, know what you’re talking about before you talk about it. Is that too much to ask?

Working with you

11 September 2019 by Clark

I was talking with my better half, who’s now working at a nursery. Over time, she has related stories of folks coming to ask for assistance. And the variety is both interesting and instructive. There’s a vast difference of how people can be working with you.

So, for one, she likes to tell stories of people who come in saying “you know, I want something ‘green’”. Or, worse, “I want a big tree that doesn’t require any watering at all”. (Er, doesn’t exist.) The one she told me today was a lady who came in wanting “you know, it’s white and grows like <hand gesture showing curving over like a willow>”. So m’lady showed her a plant fitting the description. But “no, it’s not got white flowers”. It ended up being a milkweed, which isn’t white and stands straight up!

What prompted this reflection was the situation she cited of this other customer. He comes in with a video of the particular section he wants to work on this time, with measurements, and a brief idea of what he’s thinking. Now this is a customer that’s easy to help; you can see the amount of shade, know the size, and have an idea of what the goal is.

I related this (of course ;), to L&D. What you’d  like is the person who comes and says “I have this problem: performance should be <desired measurement> but instead it’s only <current measurement>. What steps can we take to see if you can help?”  Of course, that’s rare.  Instead you get “I need a course on X.”  At least, until you start changing the game.

JD Dillon tweeted “…But in real life they can’t just say NO to the people who run the organization. ‘Yes, and …’ is a better way to get people to start thinking differently.” And that’s apt. If you’ve always said “yes”, it’s really not acceptable to suddenly start saying “no”.  Saying “Yes and…” is a nice way to respond. Something like “Sure, so what’s the problem you’re hoping this course will solve?”

And, of course, you should be this person too. “Let me tell you why I’d like to buy a VR headset,” and go on to explain how this critical performance piece is spatial and visceral and you want to experiment to address it. Or whatever. Come at it from their perspective, and you have a better chance, I reckon.

You won’t always get the nice customers, but if you take time and work them through the necessary steps at first, maybe you can change them to be working with you. That’s better than working for them, or fighting with them, no?

Craft and commercial?

10 September 2019 by Clark

Occasionally I try to look at the broader swings we see (in a variety of things). In learning technology, there’s been a gross pendulum swing, and maybe smaller ones. I think we’ve swung between craft and commercial approaches to design, and I’m hoping we’re on a return swing.

When we first started playing with learning technology, every approach was pretty much hand-crafted. We didn’t have specific tools for learning outcomes, and we had to apply generic tools like general computer systems. Early platforms like PLATO were custom crafted, as were the individual applications on top of them. And a small industry was built upon this basis to build solutions at scale, but the market never emerged. The whole solution was too costly, despite the power.

The PC revolution initially meant individuals or small teams built solutions. Authoring systems did emerge (e.g. PILOT), and even a meta-language for developing human-computer learning interactions. However, the usage was small. People hand-built things like games (e.g. Robot Odyssey and Snooper Troops), though a few companies arose to do this systematically.

As technology changed, so too did the platforms. Videodiscs and Computer- and then Web-Based Training emerged. Companies arose to do them at scale, but things were changing rather fast. Flash came about as a web-based lingua franca, where programs could run in most browsers with a plug-in. And, specifically for learning, Authorware became a powerful tool.

Still as things changed quickly, most solutions were driven by a real need, and hand-crafting was the norm.  But, of course, this changed.

With the horrors of 9/11, travel went from an increasingly affordable luxury to undesirable. Demand came for ‘elearning’, reducing the costs of travel and overhead. With it came tools that made it easy to take content, add a quiz, and pop it up on a screen. The emphasis shifted from quality to quantity.

And this has continued in many guises. The difference, I hope, is that the pendulum is swinging back. The signs I see are an increase in interest in learning science: the Guild’s DemoFest, Julie Dirksen’s Design for How People Learn, Will Thalheimer’s Debunker Club, and the Serious eLearning Manifesto. We’re learning more about good design, and more people are picking up on it. We’re talking learning experience design, integrating learning science with engagement.

If you look at other industries like automobiles, we went from craft to commercial (c.f. assembly line manufacturing). While we’re unlikely to go back to fully crafted, owing to safety regulations, we’re seeing more options for establishing individual representation. And in furniture and clothing we’re seeing more craft.

The quality is important. If we swing back to craft now, maybe when we swing to commercial again it will reflect learning quality, not expediency. In some sense it doesn’t matter whether it’s craft or commercial, as long as it’s good. And hopefully that becomes a defining characteristic of our industry. Fingers crossed!
