Learnlets


Clark Quinn’s Learnings about Learning

Not working harder

2 August 2023 by Clark

Seek > Sense > Share

A colleague recently suggested that I write about how I get so much done. Which is amusing to me, since I don’t think I get much done at all! Still, her point is that I turn around requests for posts the next day, generate webinars quickly, etc. So, I thought I’d talk a bit about how I work (at the risk of revealing how much I, er, goof off). It’s all about not working harder! It may be that I’m not doing a lot compared to folks who work in more normal situations, but apparently I’m at least perceived as productive.

So, as background, I have a passion for learning. I remember sitting on the floor, poring over the (diagrams in the) World Book. My folks reinforced this, in a story I think I’ve told before, about how the only acceptable excuse for leaving the dinner table was to look something up. Actually, while I did well in school, it wasn’t perfect, because I was learning to learn, not to do well in school. That was just a lucky side effect. I went on and got a Ph.D. in cognitive science, which I argue is the best foundation for dealing with folks. (Channeling my advisor.)

So, I’ve been lucky to have a good foundation. I do recall another story, which I may also have regaled you with. It’s about my father’s friend, who succeeded in a job despite having said, in effect, that if it appeared he was asleep, he was working, and he’d still do the work of two. (He did.) The point being that taking time to learn and reflect is useful. I did the same, spending time reading magazines with my feet up on the desk in my first job out of college, yet still producing good work.

That’s continued, through my graduate school career, academic life, workplace work, and consulting. The latter wasn’t my chosen approach; it was involuntary (despite appearing to be desirable). Somehow, it became a way of life. (And I’ve realized there are lots of things I wouldn’t have been able to do if I’d had a real job.) What I do, regularly, are two major things which I think are key.

The first is that I continue to learn. I read (a lot). Partly it’s to stay up on the news in general, but I also try to track what happens in our field. I check in on LinkedIn, largely through the folks I follow. I’ve tried to practice Harold Jarche’s PKM, as I understand it. That is, I update the folks I follow, as well as the media I follow them on (for instance, Twitter is dwindling and I’m now more on Mastodon).

I also allow time for my thoughts to percolate. For instance, I take walks at least a couple of times a week. I can put a question or thought in my mind and head out. To capture thoughts, I use dictation in Apple’s Notes. I also read fiction and play games, to allow thoughts to ferment. (My preferred metaphor; you can also choose percolate or incubate. ;) I even do household chores as a way to allow time to think. Basically, it looks like I’m spending a lot of time not working. Yet, this is critical to coming up with new ideas!

I also take time to organize my thoughts. Diagramming things is one way I understand them. I blog (like this) for the same reason. These are my personal processing mechanisms. When I do presentations and write articles for others, they’re the result of the time I’ve spent here. If you look at Harold’s process, I set up good feeds to ‘seek’ (and do searches as well), I process actively, through diagramming and posting, and then I share, through posts, presentations, workshops, books, and more.

(Diagram: how models connect to context to make predictions.)

Note that it’s not about remembering rote things, but about seeing how they connect. That takes time. And work. But it pays off. I’ll suggest that turning the ideas into models, connected causal stories, helps. So, it’s about understanding how things work, not just ‘knowing’ things. It’s about being able to predict and explain outcomes, not just to tout statistics and facts.

With this prep, I can put together ideas quickly. I’ve thought them through, so I have formed opinions. It’s then much easier to decide how to string them together for a particular goal. The list of things I’ve thought about continues to grow (even if I’ve forgotten some and joyfully rediscover them!). I can write it out, or create a presentation, which are basically just linear paths through the connections.

How do I have time to do this? Well, I work from home, so that makes it easier. I also don’t work a regular job, and have gotten reasonably effective at using tools to get things done. For instance, I’m now using Apple’s Reminders to track ‘todos’, along with its Calendar. (I’m cheap; I’ve used fancier tools, but have found these suffice.) Needless to say, I’m quite serious when I say “if a commitment I make doesn’t get into my device, we never had the conversation.”

Thus, it’s about working smarter. I don’t have an org, so these are just my practices. If you watched me, you’d see bursts of productivity combined with lots of ‘down time’. That’s hard for an org to accept, yet that’s the way we work best. As we start having tools that automate more of our rote tasks, we should retain the creative things like painting, music, and more, not relegate them to AI. Then we can start working more like the creative beings we are, and start recognizing that taking time out for the non-productive is actually more productive. That’s how we work smarter, and are not working harder.

Web 3.0 and system-generated content

20 June 2023 by Clark

Not quite 15 years ago, I proposed that Web 3.0 would be system-generated content. There was talk about the semantic web, where we started tagging things, even auto-tagging, and then operating on chunks by rules connecting tags, not hard wiring. I think, however, that we’ve reached a new interpretation of Web 3.0 and system-generated content.

Back then, I postulated that Web 1.0 was producer-generated content. That is, the only folks who could put up content had all the skills: teams (or the rare individual) who could manage the writing, the technical specification, and the technical implementation. They had to put prose into HTML and then host it on the web with all the server requirements. These were the ones who controlled what was out there. Pages were static.

Then CGI scripts came along, and folks could maintain state. This enabled some companies to make tools that could handle the backend, so individuals could create. There were forms where you could type in content, and the system could handle posting it to the web (e.g. this blog!). So, most anyone could be a web creator. Social media emerged (with all the associated good and bad). This was Web 2.0: user-generated content.

I saw the next step as system-generated content. Here, I meant small chunks of (human-generated) content linked together on the fly by rules. This is, indeed, what we see in many sites. For instance, when you see recommendations, they’re based upon your actions and statistical inferences from a database of previous actions. Rules pull up content descriptions by tags and present them together.
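As a rough illustration (a hypothetical sketch of mine, with made-up content, not how any particular site actually does it), rule-based assembly might work like this: content chunks carry tags, your previous actions suggest tags of interest, and a rule pulls matching chunks together on the fly.

```python
# Toy sketch of rule-based, system-assembled content (hypothetical example).
# Chunks of human-generated content are tagged; a rule selects and combines
# them based on tags inferred from a user's previous actions.

chunks = [
    {"title": "Intro to spaced practice",  "tags": {"learning", "retention"}},
    {"title": "Mobile design checklist",   "tags": {"mobile", "design"}},
    {"title": "Retrieval practice ideas",  "tags": {"learning", "retention"}},
    {"title": "Social learning patterns",  "tags": {"social", "learning"}},
]

def infer_interest_tags(viewed_titles):
    """Infer tags of interest from what the user has already viewed."""
    tags = set()
    for chunk in chunks:
        if chunk["title"] in viewed_titles:
            tags |= chunk["tags"]
    return tags

def recommend(viewed_titles, limit=2):
    """Rule: surface unviewed chunks sharing the most tags with the user's history."""
    interests = infer_interest_tags(viewed_titles)
    candidates = [c for c in chunks if c["title"] not in viewed_titles]
    candidates.sort(key=lambda c: len(c["tags"] & interests), reverse=True)
    return [c["title"] for c in candidates[:limit]]

print(recommend({"Intro to spaced practice"}))
# e.g. ['Retrieval practice ideas', 'Social learning patterns']
```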

There is another interpretation of Web 3.0, in which systems are disaggregated. So, your content isn’t hosted in one place, but is distributed (cf. Mastodon or blockchain). Here, the content and content creation are not under the control of one provider. This disaggregation undermines unified control; it’s really a political issue with a technical solution.

However, we now see a new form of system-generated content. I’ll be clear, this isn’t what I foresaw (though, post-hoc, it could be inferred). That is, generative AI is taking semantics to a new level. It’s generating content based upon previous content. That’s different than what I meant, but it is an extension. It has positives and negatives, as did the previous approaches.

Ethics, ultimately, plays a role in how these play out. As they say, PowerPoint doesn’t kill people, bad design does. So, too, with these technologies. While I laud exploration, I also champion keeping experimentation in check, that is, nicely sandboxing such experimentation until we understand it and can have appropriate safeguards in place. As it is, we don’t yet understand the copyright implications, for one. I note that this blog was contributing to Google’s C4 dataset (according to a tool I can no longer find), for instance. Also, everyone using ChatGPT has to assume that their queries are data.

I think we’re seeing system-generated content in a very new way. It’s exciting in terms of work automation, and scary in terms of the trustworthiness of the output. I’m erring on the side of not using such tools, for now. I’m fortunate that I work in a space where people pay me for my expertise. Thus, I will continue to rely on my own interpretation of what others say, not on an aggregation tool. Of course, people could generate stuff and say it’s from me; that’s Web 3.0 and system-generated content. Do be careful out there!

Grounded in practice

16 May 2023 by Clark

Many years ago, I was accused of not knowing the realities of learning design. It’s true that I’ve been in many ways a theorist, following what research tells us, and having been an academic. I also have designed solutions, designed design processes, and advised orgs. Still, it’s nice to be grounded in practice, and I’ve had the opportunity of late.

So, as you read this, I’m in India (hopefully ;), working with Upside Learning. I joined them around 6 months ago to serve as their Chief Learning Strategist (on top of my work as Quinnovation, as co-director of the Learning Development Accelerator, and as advisor to Elevator9). They have a willingness to pay serious attention to learning science, which as you might imagine, I found attractive!

It’s been a lot of marketing: writing position papers and such. The good news is it’s also been about practice. For one, I’ve been running workshops for their team (such as the Missing LXD workshop with the LDA coming up in Asia-friendly times this summer). We’ve also created some demos (coming soon to a sales preso near you ;). I’ve also learned a bit about their clients and usual expectations.

It’s the latter that’s inspiring. How do we bake learning science into a practical process that clients can comprehend? We’re working on it. So far, it seems like it’s a mix of awareness, policy, and tools. That is, the design team must understand the principles in practice, there need to be policy adjustments to support the necessary steps, and the tools should support the practice. I’m hoping we have a chance to put some serious work into these during my visit.

Still, it’s already been eye-opening to see the realities organizations face in their L&D roles. It only inspires me more to fight for the changes in L&D that can address this. We have lots to offer orgs, but only if we move out of our comfort zone and start making changes. Here’s to the revolution L&D needs to have!

 

Attention is underrated

2 May 2023 by Clark

Attention is a complex phenomenon. Thinking that we can address it simply is probably naive. Worse, there is at least one pervasive myth about it. Trivial attention is probably overrated, but meaningful attention is underrated.

Attention, I’ll suggest, is how we pay conscious awareness to our thinking. We pay attention to the sensory stream that’s available, and as working memory has limits, our attention chooses what ends up in working memory (which is where we see conscious thought). This is the picture I paint in Learning Science for Instructional Designers, my recent book on how we learn. That’s how I learned it in grad school, and little seemed to change that.

As an aside, I suggest that the basic human information processing loop is something that is critical to understand. This is true for learning designers, but I would suggest there’s broader applicability. Knowing how information flows:

  • from sensory store to working memory via attention
  • from working memory to long term memory via elaboration
  • back to working memory via retrieval
  • and to decision from working memory

as a simplified story, shows how humans work in many ways. It gets more complex in important ways, but this is a key basis. On top of it come aspects of how we think and learn, but this is the core. It benefits anyone dealing with people, basically: UI, marketing, etc. In short, most everyone.
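To make that flow concrete, here’s a deliberately oversimplified toy sketch (my own illustration, with made-up names, not a claim about actual cognitive architecture): an attention filter selects from the sensory stream into a capacity-limited working memory, elaboration moves items into long-term memory, and retrieval brings them back when a decision is needed.

```python
# Toy, deliberately oversimplified model of the information processing loop
# described above (illustrative only, not a cognitive architecture).

from collections import deque

WORKING_MEMORY_CAPACITY = 4  # assumed small limit, to mimic working memory constraints

long_term_memory = set()
working_memory = deque(maxlen=WORKING_MEMORY_CAPACITY)  # oldest items drop out

def attend(sensory_stream, goal):
    """Attention: only goal-relevant items make it into working memory."""
    for item in sensory_stream:
        if goal in item:
            working_memory.append(item)

def elaborate():
    """Elaboration: items actively held in working memory move to long-term memory."""
    long_term_memory.update(working_memory)

def retrieve(cue):
    """Retrieval: cues bring related items from long-term memory back to working memory."""
    for item in long_term_memory:
        if cue in item:
            working_memory.append(item)

def decide():
    """Decision: act on whatever is currently held in working memory."""
    return list(working_memory)

attend(["cat video", "design principle A", "design principle B"], goal="design")
elaborate()
working_memory.clear()
retrieve("design")
print(decide())  # ['design principle A', 'design principle B'] (order may vary)
```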

Recent pictures of the information processing loop suggest, however, that attention has a bigger purview. They have it influencing most of the above. Which may be more accurate, in that you need to attend to what’s in working memory, and manage the process of attending to information while evaluating what decision to make. You must maintain conscious focus on what you want to learn.

The myth, which still persists, is that our attention span has dropped to 8 seconds, which folks tout as less than that of a goldfish. (How do we know what the attention span of a goldfish is?) The origin of this myth was StatBrain misinterpreting a study, and it was amplified because it was published by Microsoft Canada. Marketing, mind you, not their research group! A myth I busted in a previous book!

There is apparently some evidence that our attention span has dropped (to 40-something seconds, not eight), but we can still disappear into movies, novels, and games for hours. I reckon it’s about how engaging it is. Which, not completely surprisingly, is the topic of my most recent book, Make It Meaningful.

So, please, avoid the myths, and learn the core. Attention is underrated, as is the whole human information processing loop. Learn it, and benefit.

Thinking artificially

21 February 2023 by Clark

I finally put my mitts on ChatGPT. The recent revelations, concern, and general plethora of blather about it made me think I should at least take it for a spin around the block. Not surprisingly, it disappointed. Still, it got me thinking about thinking artificially. It also led me to a personal commitment.

What we’re seeing is a two-fold architecture. On one side is a communication engine, e.g. ChatGPT. It’s been trained to be able to frame, and reframe, text communication. On the other side, however, must be a knowledge engine, i.e. something to talk about. The current instantiation used the internet. That’s the current problem!

So, when I asked about myself, the AI accurately posited two of my books. It also posited one that, as far as I know, doesn’t exist! Such results are not unknown. For instance, owing to the prevalence of the learning styles myth (despite the research), the AI can write about L&D and mention styles as a necessary consideration. Tsk!

The problem’s compounded by the fact that many potential knowledge bases, beyond the internet, have legacy problems. Bias has been a problem in human interactions, and records thereof can therefore also have bias. As I (with co-author Markus Bernhardt) have opined, there is a role for AI in L&D, but a primary one is ensuring that there’s good content for an AI engine to operate on. Another, I argue, is to create the meaningful practice that AI currently can’t, and likely won’t be able to for the foreseeable future. I also have yet to see an AI that can create a diagram (tho’ that, to me, isn’t as far-fetched, depending on the input).

I have heard from colleagues who find the existing ChatGPT very valuable. However, they don’t take what it says as gospel; instead, they use it as a thinking partner. That is, they’ll prompt it with thoughts they’re having to see what comes up. The goal is to get some lateral input to consider (not take as gospel). It’s a way to surface ideas they may have missed or not seen, which is a valuable role.

At this point, I may or may not use AI in this way, as a thinking (artificially) partner. I’ll have to experiment. One thing I can confidently assert is that everything you read (e.g. here) that is truly from me (i.e. there’s the possibility I will be faked) will be truly from me. I’m immodest enough to think that my writing is not in need of artificial enhancement. I may be wrong, but that’s OK with me. I hope it is with you, too!

Hyping the news

31 January 2023 by Clark

I just saw another of those ‘n things you must…if you…’ headlines, and as usual it had the opposite effect to the one intended. I guess I’m a contrarian, because such headlines are an immediate warning to me. It happened to be in an area I know about, and I hadn’t done any of the supposedly necessary things. Yet, I have done the thing they were saying needed the prerequisites. Arguably well (do awards count?). It made me reflect on how we’re hyping the news. Some thoughts…

Yes, I know that such headlines are clickbait. ‘n‘ should be small. Yet when I tried to boil down Upside’s ‘deeper learning’ list for an infographic, it came to 14 items.  Inconvenient for hype,  I’m afraid, but what I’d put in the white paper. Of course there’s more, but I’m trying to be comprehensive, not ‘attractive’.  Similarly, when I created my EEA alignment, I had nine elements. Not because they were convenient for marketing, but because that’s what emerged from the work.

I similarly see lists of ‘the five things’, or the ‘8 things’ (somehow 8 seems to be a maximum, at least for marketing ;). What worries me about these lists is whether they’re comprehensive. Is that really all? Have you ensured that they’re necessary and sufficient? Did you even have a process? It took four of us working through months to come up with the eight elements of the Serious eLearning Manifesto. None of the above lists (Manifesto, EEA, deeper learning) are definitive, but they are the result of substantial work and thinking, not just pulled together for a marketing push.

There are good lists, don’t get me wrong. Ones where people have worked to try to identify critical elements, or good choices based upon principled grounds. Typically, if it’s the case, there are pointers to the basis for these claims. Either there’s someone who’s known for work in the area, or they’re transparent about process. However, there are also lists where it’s clear someone’s just pulled together some random bits. Look for inconsistency, mismatches of types, etc.

In the broader picture, it’s clear that fear, outrage, and sensationalism sell. I just want to demonstrate a resistance, and prefer a clear argument over a rant. (Here I’m trying to do the former, not the latter. ;) This goes with my broader prescription: I do want policy wonks making decisions. I really don’t want simple wrong answers to complicated questions, no matter how appealing.

So, my short take is if you know the area, read with a critical eye. If you don’t, look for warning signs, and see what those who do know have to say about it. Caveat emptor. That’s my take on trying to stay immune to the hyping of news.

Learners as learning evaluators

24 January 2023 by Clark

Many years ago, I led the learning design of an online course on speaking to the media. It was way ahead of the times in a business sense; people weren’t paying for online learning. Still, there were some clever design factors in it. I’ve lifted one to new purposes, but also have a thought about how it could be improved. So here are some thoughts on learners as learning evaluators.

The challenge comes from two conflicting demands. For one, we want to support free answers on the part of learners. This is for situations where there’s more than one way to respond, for example a code solution or a proposed social response. The other is the desire for auto-marking, that is, independent asynchronous learning. While it’s ideal to have an instructor in the loop to provide feedback, the asynchronous part means that’s hard to arrange. We could try to have an intelligent programmed response (cf. artificial intelligence), but those can be difficult to develop and costly. Is there another solution?

One alternative, occasionally seen, is to have the learner evaluate their response. There are positive benefits to this, as it gets learners to become self-evaluators. One of the mechanisms to support this is to provide a model answer to compare with the learner’s own response. We did this in that long-ago project, where learners could speak their response to a question, then listen to both their own and a model response.

There are some constraints on doing this; learners have to be able to see (or hear) their response in conjunction with the model response. I’ve seen circumstances where learners respond to complex questions and get the answer, but they don’t have a basis for comparison. That is, they don’t get to see their own response, and the response was complex enough not to be completely remembered. One particular instance of this is multiple-response questions, where you pick out a collection of options.

I want to go further, however. I don’t assume that learners will be able to effectively compare their response to the model response. At least, initially. As they gain expertise, they should, but early on they may not have the requisite ability. You can annotate the model answer with the underlying thinking, but there’s another option.

I’m considering the value of an extra rubric that states what you should notice about the model answer and prompts you to check whether you have all the elements. I’m suggesting that this extra support, while it might add some cognitive load to the process, also reduces load by supporting attention to the important aspects. Also, this is scaffolding that can be gradually removed, allowing learners to internalize the thinking.
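As a rough sketch of what that could look like (a hypothetical structure of my own, with made-up content, not a description of any existing tool), the rubric simply pairs each key element of the model answer with a prompt, and the learner checks their own response against each:

```python
# Hypothetical sketch: a model answer annotated with a rubric of elements and
# prompts, used to scaffold learner self-evaluation of a free-form response.

model_answer = (
    "Acknowledge the customer's frustration, restate the issue in your own words, "
    "and offer a concrete next step with a timeframe."
)

rubric = [
    {"element": "Acknowledges the emotion",
     "prompt": "Did you name or validate how the customer feels?"},
    {"element": "Restates the issue",
     "prompt": "Did you paraphrase the problem to confirm understanding?"},
    {"element": "Offers a concrete next step",
     "prompt": "Did you commit to a specific action and timeframe?"},
]

def self_evaluate(learner_response):
    """Walk the learner through the rubric; they judge each element themselves."""
    print("Your response:\n", learner_response, "\n")
    print("Model answer:\n", model_answer, "\n")
    results = {}
    for item in rubric:
        answer = input(f"{item['element']}: {item['prompt']} (y/n) ")
        results[item["element"]] = answer.strip().lower().startswith("y")
    return results

# Example (interactive):
# self_evaluate("I get that this is frustrating; I'll check and call you back tomorrow.")
```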

I think we can have learners as learning evaluators, if we support the process appropriately. We shouldn’t assume that ability, at least initially, but we can support it. I’m not aware of research on this, though I certainly don’t doubt it. If you do know of some, please do point me to it! If you don’t, please conduct it! :D Seriously, I welcome your thoughts, comments, issues, etc.

Debating debates

17 January 2023 by Clark

This is the year, at the LDA, of unpacking thinking (the broader view of my previous ‘exposure‘). The idea is to find ways to dig a bit into the underlying rationale for decisions, to show the issues and choices that underlie design decisions. How to do that? Last year we had the You Oughta Know series of interviews with folks who represent some important ideas. This year we’re trying something new, using debates to show tradeoffs. Is this a good idea? Here’s the case, debating debates.

First, showing underlying thinking is helpful. For one, you can look at Alan Schoenfeld’s work on showing his thinking as portrayed in Collins & Brown’s Cognitive Apprenticeship. Similarly, the benefits are clear in the worked examples research of John Sweller. While it’s fine to see the results, if you’re trying to internalize the thinking, having it made explicit is helpful.

Debates are a tried and tested approach to issues. They require folks to explore both sides. Even if there’s already a reconciliation, I feel, it’s worth it to have the debate to unpack the thinking behind the positions. Then, the resolution comes from an informed position.

Moreover, they can be fun! As I recalled here, in an earlier debate, we agreed to that end. Similarly, in some of the debates I had with Will Thalheimer (e.g. here), we deliberately were a bit over-the-top in our discussions. The intent is to continue to pursue the fun as well as exposing thinking. It is part of the brand, after all ;).

As always, we can end up being wrong. However, we believe it’s better to err on the side of principled steps. We’ll find out. So that’s the result of debating debates. What positions would you put up?

Don’t make me learn!

10 January 2023 by Clark

In a conversation with a client, the book Don’t Make Me Think was mentioned. Though I haven’t read it, I’m aware of its topic: usability. The underlying premise also is familiar: make interfaces that use pre-existing knowledge and satisficing solutions. (NB: I used to teach interface design, having studied under one of the gurus.) However, in the context of the conversation, it made me also ponder a related topic: “don’t make me learn”. Which, of course, prompted some reflection.

There are times, I’ll posit, when we don’t want employees to be learning. There are times when learning doesn’t make sense. For instance, if the performance opportunities are infrequent, it may not make sense to try to have it in people’s heads. If there’s a resource people can use to solve the problem rather than learning, that is probably a better answer. That is, in almost any instance, if the information can be in the world, perhaps it should be.

One reason for this is learning, done properly, is hard. If a solution must be ‘in the head’ – available when needed and transferring to appropriate situations – there’ll likely be a fair bit of practice required. If it’s complex, much more so. Van Merriënboer’s Four Component Instructional Design is necessarily rigorous! Thus, we shouldn’t be training unless it absolutely, positively, has to be in the head when needed (such as in life-threatening situations such as aviation and medicine).

I’m gently pushing the idea that we should avoid learning as much as possible! Make the situation solvable in some other way. When people talk about ‘workflow learning’, they say that if it takes you out of the workflow, it’s not workflow. I’ll suggest that if it doesn’t, it’s not learning. Ok, so I’m being a bit provocative, but too often we err on the side of throwing training at it, even when it’s not the best solution. Let’s aim for the reverse, finding other solutions first. Turn to job aids or community (learning can be facilitated around either, as well), but stop developing learning as a default.

So, don’t make me learn, unless I have to. Fair enough?

Looking ahead

3 January 2023 by Clark

A number of people are indicating that 2022 is another year to move on from. And, of course, we do need to move on (as if there were an alternative ;). Still, 2022 was a good year for Quinnovation, and here’s hoping that continues. Here are some random thoughts looking ahead.

For one, I saw an interesting piece leveraging the financial adage (really: caution) that “past performance is not indicative of future results”. That comes with various investment opportunities; just because they’ve done well in the past doesn’t mean that will continue. The nice twist in the article was to apply it to yourself: if the past year wasn’t a great one, that doesn’t mean you’re going to continue to suffer. Things can get better despite what happened in the past (or worse), though of course taking your own proactive steps is recommended. Indeed, the fact that 2020 and 2021 were slow years for me didn’t mean 2022 had to be. Fortunately!

In the broader sense, I think that despite some hiccups, we’re seeing positive trends. For instance, I increasingly see calls for greater attention to evidence-based practices. That doesn’t mean it’s happening yet, but the notice hopefully precedes implementation!

We’ve still some legacies slowing us down, of course. I do think that the belief in us as formal reasoning beings will continue to be a barrier. Still, the above clarion call should help us move (however slowly) to right that wrong.

I’m optimistic, by nature (despite being skeptical). Thus, I think we are working our way forward. I reckon I’ll keep working on that, at least. I am continuing with the Learning Development Accelerator, and Upside Learning, as well of course continuing to do Quinnovative things. I’m looking ahead to us having an impact, together!
