Learnlets

Clark Quinn’s Learnings about Learning

What L&D resources do we use?

29 October 2024 by Clark

This isn’t a rhetorical question. I truly do want to hear your thoughts on the resources needed to successfully execute our L&D responsibilities. Note that by resources, in this particular case, I’m not talking about courses (e.g., skill development), nor community. I’m specifically asking about the information resources (such as overviews) and, in particular, the tools we use to do our job. So I’m asking: what L&D resources do we need?

[Diagram: spaces for strategy, analysis, design, development, implementation, and evaluation, as well as topics of interest; types of resources include tools, information resources, overviews, and diagrams, with some examples populating the spaces.]

I’m not going to ask this cold, of course. I’ve thought about it a bit myself, creating an initial framework (click on the image to see it larger). Ironically, considering my stance, it’s based around ADDIE. That’s because I believe the elements are right; it’s just not a good basis for a design process. However, I do think we may need different tools for the stages of analysis, design, development, implementation, and evaluation, even if we don’t invoke them in a waterfall process. I also have categories for overarching strategy, and for specific learning topics. These are spaces in which resources can reside.

There are also several different types of resources I’ve created categories for. One is an overview of the particular spaces I indicate above. Another is information resources, which drill into a particular approach or more. These can be in any format: text or video, typically. Because I’m weird for diagrams, I have them separately, but they’d likely be a type of info resource. Importantly, one type is tools. Here I’m thinking of the performance support tools we use: templates, checklists, decision trees, lookup tables. These are the things I’m a bit focused on.

Of course, this is for evidence-based practices. There are plenty of extant frameworks that are convenient, and cited, but not well-grounded. I’m looking for the tools you trust to accomplish meaningful solutions to real problems. I’m looking for the ones you use. The ones that provide support for excellent execution. In addition to the things listed above, how about processes? Frameworks? Models? What enables you to be successful?

Obviously, but importantly, this isn’t done! That is, I put my first best thoughts out there, but I know that there’s much more. More will come to me (it already has; I’ve revised the diagram a couple of times), but I’m hoping more will come from you too. That includes the types of resources and spaces, as well as particular instances.

The goal is to think about the resources we have and use. I welcome your input, via comments on the blog or wherever you see this post; let me know which ones you find essential to successful execution. I’d really like to know: what L&D resources do we use? Please take a minute or two and weigh in with your top and essential tools. Thanks!

Learning Science Conference 2024

15 October 2024 by Clark

I believe, quite strongly, that the most important foundation anyone in L&D can have is understanding how learning really works. If you’re going to intervene to improve people’s ability to perform, you ought to know how learning actually happens! Which is why we’ve created the Learning Science Conference 2024.

We have some of the most respected translators of learning science research to practice. Presenters are Ruth Clark, Paul Kirschner, Will Thalheimer, Patti Shank, Nidhi Sachdeva, as well as Matt Richter and myself. They’ll be providing a curated curriculum of sessions. These are admittedly some of our advisors to the Learning Development Accelerator, but that’s because they’ve reliably demonstrated the ability to do the research, and then to communicate the results of their own and others’ work in terms of the implications for practice. They know what’s right and real, and make that clear.

The conference is a hybrid model; we present the necessary concepts asynchronously, starting later this month. Then, from 11-15 November, we’ll have live online sessions led by the presenters. These are at two different times to accommodate as much of the globe as we can! In these live sessions we’ll discuss the implications and workshop issues raised by attendees. We will record the sessions in case you can’t make it. I’ll note, however, that participating is a chance to get your particular questions answered! Of course, we’ll have discussion forums too.

We’ve worked hard to make this the most valuable grounding you can get, as we’ve deliberately chosen the topics that we think everyone needs to comprehend. I suggest there’s something there for everyone, regardless of level. We’re covering the research and implications around the foundations of learning, practices for design and evaluation, issues of emotion and motivation, barriers and myths, even informal and social learning. It’s the content you need to do right by your stakeholders.

Our intent is that you’ll leave equipped to be the evidence-based L&D practitioner our industry needs. I hope you’ll take advantage of this opportunity, and hope to see you at the Learning Science Conference 2024.

Simple Models and Complex Problems

8 October 2024 by Clark

I’m a fan of models. Good models that are causal or explanatory can provide guidance for making the right decisions. However, there are some approaches that are, I suggest, less than helpful. What makes a good or bad model? My problem is about distinguishing when to talk about each: simple models and complex problems.

A colleague of ours sent me an issue of a newsletter (it included the phrase ‘make it meaningful‘ ;). In it, the author was touting a model based on a four-letter acronym. And, to be fair, there was nothing wrong with what the model stipulated. Chunking, maintaining attention, elaboration, and emotion are all good things. What bothered me was that these elements weren’t sufficient! They covered important elements, but only some. If you just took this model’s advice, you’d have somewhat more memorable learning, but you’d fall short on the real potential impact. For instance, there wasn’t anything there about the importance of contextualized practice nor feedback. Nor models, for that matter!

I’m not allergic to n-letter acronym models. For instance, I keep the coaster I was given for Michael Allen’s CCAF on my desk. (It’s a nice memento.) His Context-Challenge-Activity-Feedback model is pretty comprehensive for the elements that a practice has to have (not surprisingly). However, learning experiences need more than just practice: they need introductions, and models, and examples, and closings as well. And while the aforementioned elements are necessary, they’re not sufficient. Heck, Gagné talked about nine events.

What I realize as I reflect is that I like models that have the appropriate amount of complexity for the level of description they’re addressing. Yet I’ve seen far too many models that are cute (some actually spell words) and include some important ideas, but aren’t comprehensive for what they cover. The problem, of course, is that you need to understand enough to be able to separate the wheat from the chaff. I’ll suggest looking to vetted models: ones supported by folks who know, with criticisms and accolades to accompany them. Read the criticisms, and see if they’re valid. If they’re not, the model may be useful.

Ok, one other thing bothered me. This model supposedly has support from neuroscience. However, as I’ve expressed before, neuroscience has yet to produce results that aren’t already available from cognitive science research. This, to me, is just marketing, with no real reason to include it except to try to make it more trendy and appealing. A warning sign, to me at least.

Look, designing for learners is complex. Good models help us handle this complexity well. Bad ones, however, can mislead us into only paying attention to particular bits and create insufficient solutions. When you’re looking at simple models and complex problems, you need to keep an eye out for help, but maybe it needs to be a jaundiced eye.

Is “Workflow Learning” a myth?

24 September 2024 by Clark

There’s been a lot of talk, of late, about workflow learning. To be fair, Jay Cross was talking about learning in the flow of work way back in the late 1990s, but the idea has recently been co-opted and become current. Yet, the question remains whether it’s real or a mislabeling (something I’m kind of anal about; see microlearning). So, I think it’s worth unpacking the concept to see what’s there (and what may not be). Is workflow learning a myth?

To start, the notion is that it’s learning at the moment of need. Which sounds good. Yet, do we really need learning? The idea Jay pointed to in his book Informal Learning drew on Gloria Gery’s work on helping people in the moment. Which is good! But is it learning? Gloria was really talking about performance support, where we’re looking to overcome our cognitive limitations. In particular, memory: putting the information into the world instead of in the head. Which isn’t learning! It’s valuable, and we don’t do it enough, but it’s not learning.

Why? Well, because learning requires action and reflection. The latter can just be thinking about the implications, or, in Harold Jarche’s Personal Knowledge Mastery model, it’s about experimenting and representing. In formal learning, of course, it’s feedback. I’ve argued we could do that, by providing just a thin layer on top of our performance support. However, I’ve never seen it done! So, you’re going to do, and then not learn. Okay, if it’s biologically primary (something we’re wired to learn, like speaking), you’re liable to pick it up over time, but if it’s biologically secondary (something we’ve created and aren’t tuned for, e.g. reading), I’d suggest it’s less likely. Again, performance is the goal. Though learning can be useful to support comprehending context and making complex decisions, which is what we’re good at.

What is problematic is the notion of workflow and reflection in conjunction. Simply, if you’re reflecting, you’re by definition out of the workflow! You’re not performing, you’re stopping and thinking. Which is valuable, but not ‘flow’. Sure, I may be overly focused on workflow being in the ‘zone’, acting instead of thinking, but that, to me, is really the notion. Learning happens when you stop and contemplate and/or collaborate.

So, if you want to define workflow to include the reflection and thoughtful work, then there is such a thing. But I wonder if it’s more useful to separate out the reflection as things to value, facilitate, and develop. It’s not like we’re born with good reflection practices, or we wouldn’t need to do research on the value of concept mapping and sketch noting and how it’s better than highlighting. So being clear about the phases of work and how to do them best seems to me to be worthwhile.

Look, we should use performance support where we can. It’s typically cheaper and more effective than trying to put information into the head. We should also consider adding some learning content on top of performance support in cases where it helps for people to know why they’re doing something, not just what to do. Learning should be used when it’s the best solution, of course. But we should be clear about what we’re doing.

I can see arguments why talking about workflow learning is good. It may be a way to get those not in our field to think about performance support. I can also see why it’s bad, leading us into the mistaken belief that we can learn while we do without breaking up our actions. I don’t have a definitive answer to “is workflow learning a myth” (so this would be an addition to the ‘misconceptions’ section of my myths book ;). What I think is important, however, is to unpack the concepts, so at least we’re clear about what learning is, about what workflow is, and when we should do either. Thoughts?

Diagramming Feedback

10 September 2024 by Clark

I’ve wrestled with the concept of feedback for a while. I think the summary Valerie Shute did for ETS is superb, BTW. And, of course, I select a pragmatic subset for the purposes of communicating the essential elements. However, it’s always been a list of important items. Which isn’t how I want to do it in a webinar. I was thinking about it today, and I began to get an idea. So, I started diagramming feedback.

[Diagram: A person generates output; the model is used to determine correctness; if incorrect, why it is so is shown; in either case, the right answer follows.]

What are the essential elements of feedback? Well, it should be on the performance, not the individual. It should be model-based, in that you should be using models to explain how to perform, showing examples of the model being used in context, and then asking the learner to use them. The feedback, then, uses the model to explain what went right, or what went wrong. Also, it should be minimal beyond that.

So, here I tried to show that the individual (or group, hmm) produces output. That output is evaluated by the model to ascertain correctness, or not. (Not the individual!) If the answer’s wrong, you say why, and then give the right answer. If it’s right, you just reinforce the right answer.
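To make the flow concrete, here’s a minimal sketch in Python of the decision logic described above. The function name, and representing “the model” as simply an expected answer plus an explanation, are my own illustrative assumptions, not part of the diagram.

    def give_feedback(learner_answer, expected_answer, explanation):
        """Minimal sketch of the feedback flow: evaluate the output (not the
        person), use the model to explain an error, then give the right answer."""
        if learner_answer == expected_answer:
            # Correct: just reinforce the right answer.
            return f"Correct: {expected_answer}."
        # Incorrect: say why (via the model), then give the right answer.
        return f"Not quite. {explanation} The right answer is {expected_answer}."

    # Hypothetical usage
    print(give_feedback(
        "series", "parallel",
        "Adding resistors in parallel lowers the total resistance."))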

Of course, this representation doesn’t convey the minimal aspect. It’s also not clear about using the model in the feedback. Still, so far it’s a representation I can talk to. So, this is my first stab at diagramming feedback. I welcome same!

The Damage Done

20 August 2024 by Clark

There’ve been recent discussions about misinformation. One question is, what does it hurt? When you consider myths, superstitions, and misconceptions (the breakdown in my book on L&D problems), what can arise? Let’s talk about the damage done.

So, let’s start with myths. These, I claim, are things that have been shown not to have value by empirical research. There are studies that have examined these claims, and found no data to support them. For instance, accommodating learning styles is a waste. Yes, we know people differ in learning, but we don’t have a reliable basis for characterizing those differences. Moreover, people’s choices to work with (or against) their style don’t make a difference in their learning. Some of the instruments are theoretically flawed as well as psychometrically invalid.

What’s the harm? I’ll suggest several ways in which myths harm us. For one, they can cause people to spend resources (money & time) addressing them that won’t have an impact. It’s a waste! We can also characterize people in ways that limit them; for instance if they think they learn in a particular way, they may avoid a topic or invest effort in an inappropriate way to learn it. Investing in unproven approaches also perpetuates them, propagating the beliefs to others.

Superstitions, as I define them, are beliefs nobody would claim to believe, yet somehow persist in our practices. For instance, few will claim to believe that telling is sufficient to achieve behavior change. Yet, we continue to see information presentation and a knowledge test, such as “awareness” training. Why? This is a waste of effort. There aren’t outcomes from these approaches. Typically, they are legacies of expectations from previous decades, yet business practices haven’t been updated. Still, to the extent that we continue these practices, even while decrying them, we’re again wasting time and money. Maybe we tick boxes and make people happy, but we can (and should) do better.

The final category is misconceptions. These are beliefs that some hold, and others decry. They aren’t invalid, but they only make sense in certain circumstances. I suggest that those who decry them don’t have the need, and those who tout them are in the appropriate circumstance. What matters is understanding when they make sense, and then using them, or not, appropriately. If you avoid them when they make sense, you may make your life harder. If you adopt them when they’re not appropriate, you could make mistakes or waste money.

At the end of the day, the damage done is the cost of wasting money and time. Understanding the choices is critical. To do so best, you can and should understand the underlying cognitive and learning sciences. You should also track the recognized translators of research into practice who can guide you without you having to read the original academese. To be professional in our practice, we need to know and use what’s known, and avoid what’s dubious. Please!

Failing right

13 August 2024 by Clark

I’ve been reading Amy Edmondson’s Right Kind of Wrong, and I have to say it’s very worthwhile. I’ve been a fan of hers since her book Teaming introduced me to the notion of psychological safety. It’s an element I’ve incorporated into my thinking about innovation and learning. This new book talks about how we have beliefs about making mistakes, and how we can, and should, be failing right.

In this book, she uses examples to vibrantly talk about failure, and how it’s an important part of life. She goes on to talk about different types of failure, and the situations they can occur in, creating a matrix. This allows us to look at when and how to fail. Along the way, she talks about self, situational, and systemic failure.

One of the important takeaways, which echoes a point Donald Norman made in The Design of Everyday Things, is that failure may not be our fault! Too often, bad design allows failure, instead of preventing it. Moreover, she makes the point that we have a bad attitude towards failure, not recognizing that it’s not only part of life, but can be valuable! When we make a mistake, and reflect, we can learn.

Of course, there are simple mistakes. I note that there’s some randomness in our architecture, e.g. To Err is Human. But also, there can be factors we haven’t accounted for, like bad design, or things out of our control. At the most significant level, she talks about complex systems, and how they can react in unpredictable ways. Along the way, what counts as ‘intelligent’ failure is made clear. Some fails are smart, others are not justified.

She also talks about how experiments are necessary to understand new domains. This is, in my mind, about innovation. She also gives prescriptions, at both the personal and org level. Dr. Edmondson talks about the value of persistence, taking ‘good enough’, but also not taking it too personally. She also talks about sharing, as Jane Bozarth would say: Show Your Work. This is for both calling out problems and sharing failure.

Along with a minor quibble about the order in which she presents a couple of things, a more prominent miss, to me, is a small shift in focus. She talks about celebrating the ‘pivot’, where you change direction. However, I’d more specifically celebrate the learning. That is, whether we pivot or not, learning something is good. Of course, I’m biased towards learning. Yes, we possibly would do something different, and celebrating action is good, but sharing the learning means others can learn from it too. Maybe I’m being too pedantic.

Still, this is another in her series of books exploring organizational improvement and putting useful tools into our hands. We can, and should, expect not to get everything right all the time, and instead should focus on failing right. Recommended.

 

The easy answer

16 July 2024 by Clark

In working on something, I’m looking at the likely steps people take. Of course, I’m listing them from easiest to most useful (with the hope that folks understand they should take the latter). However, it’s making me think that, too often, people are looking for the easy answer, not the most accurate one. Because they really don’t know the problem. When does the easy answer make sense? Are we letting ourselves off the hook too much?

So, for instance, in learning we really should do analysis when someone asks for something. “We need a course on X.” “Ok, what tells you that you need this, and how will we know when it’s worked?” In a quick family convo, we established that this sort of un-analytical request is made all the time:

  • “Why isn’t my plant blooming?” (It’s not the season.)
  • “Fix this code.” (The input’s broken, not the code.)
  • …

Yet, people actually don’t do this up-front analysis. Why? It’s harder, it takes more time, it slows things down, it costs more. Besides, we know what the problem is.

[Diagram: double diamond: diverge and converge on the problem, then on the solution.]

Except, we don’t know what the problem is. Too often, the question or request is making some assumptions about the state of the world that may not be true. It may be the right answer, but it may not. Ensuring that you’ve identified the problem correctly is the first part of the design process, and you should diverge on exploration before you converge on a solution. That’s the double diamond, where you first explore the problem, before you explore a solution.

Perhaps counter-intuitively, this is more efficient. Why? Because you’re not expending resources solving the wrong problem. Are you sure you’ve gotten it right? How do you know when to take the easier path? If you know the answer you need, you’re better equipped to choose the level of solution you need. If you don’t know the question, however, and make assumptions about the root cause, you can go off the rails. And, end up spending effort you didn’t need to.

Look, I live in the real world. I have to take shortcuts (heck, I’m lazy ;). And I do. However, I like to do that when I know the answer, and know that the outcome is good enough to meet the need. I’ll go for the easy answer, if I know it’ll solve the problem well enough. But I can’t if I don’t know the question or problem, and just assume. And we know what happens when we ass-u-me.

A Learning Science Conference?

9 July 2024 by Clark

[Image: Learning Science Conference 2024 banner: “Online. Asynchronous & Live Sessions”]

In our field of learning design (aka instructional design), it’s too frequently the case that folks don’t actually know the underlying learning science that guides processes, policies, and practices. Is this a problem? If it is, what is the remedy?

Consider that you wouldn’t want an electrician who didn’t understand the principles of electricity. Such a person might not understand, for instance, the importance of grounding, leaving open the possibility of burning down the house.

So, too, with learning. If you don’t understand learning science, you might not understand why accommodating learning styles is a waste of money, the lack of value of information alone, or that you should make alternatives to the right answer reflect typical misconceptions. There’s lots more: models, context, and feedback are also among the topics whose nuances most folks don’t understand.

If you don’t understand learning science, you waste money. You are likely to design ineffective learning, wasting time and effort. Or you might expend unnecessary effort on things that don’t have an impact. Overall, it’s a path to the poorhouse.

Of course, there are other reasons why we don’t have the impact we should: mismatched expectations on costs and time, SME recalcitrance and hubris, and more. Still, you’re better equipped to counter these problems if you can justify your stance from sound research.

The way to address this, of course, also isn’t necessarily easy. You might read a book, though some can mislead you. And, you still don’t get answers if you have questions. Or, you could pay for a degree, but those can be quite expensive and ineffective. Too frequently they spend time on process and not enough on principles.

There’s another option, one we’re providing. What if you could get the core essentials, curated for their relevance? What if that content were provided for you asynchronously, buttressed by the opportunity for meaningful interaction, in a tight time frame (at different times depending on your location)? What if the presenters were some of the most important names in the field, individuals who’ve reliably demonstrated an ability to translate academic research into comprehensible principles? And, finally, what if this were delivered at an appropriate cost? Does that sound like a valuable proposition?

I’d like to invite you to the Learning Science Conference, put on by the Learning Development Accelerator. Faculty who have already agreed include Ruth Clark (co-author of eLearning & The Science of Instruction), myself (author of Learning Science for Instructional Designers), Matt Richter (co-director of the Thiagi Group), and Nidhi Sachdeva (faculty at the University of Toronto). The curriculum covers nine of the most important elements of learning science, including learning, myths and barriers, motivation, informal and social learning, media, and evaluation.

This event is designed to leave you with the foundations necessary to be able to design learning experiences that are both engaging and effective, as well as dealing with the expected roadblocks to success. Frankly, we see little else that’s as comprehensive and practical. We hope to see you there!

Break it down!

2 July 2024 by Clark

[Image: jigsaw puzzle pieces]

In our LDA Forum, someone posted a question asking about taking Cathy Moore’s Action Mapping for soft skills, like improving team dynamics. Now, they’re specifically asking about a) people with experience, and b) in the context of not-for-profits, so…I’m not a good candidate to respond. However, what it does raise is a more common problem: how do you train things that are more ephemeral, like, for instance, leadership or communication? My short answer is “break it down”. What do I mean? Here’re some thoughts, and I welcome feedback!

Many moons ago, I co-wrote a paper on evaluating social media impacts. There are the usual metrics, like ‘engagement’. That is, are people using the system? Of course, for companies charging for their platform, this could be as infrequent as a person accessing it once a month. More practically, however, it should be a person hitting it at least several times a week, or even several times a day! If you’re communicating, cooperating, and collaborating, you really should be interacting at a fair frequency.

I, on the other hand, argued for more detailed implications. If you’re putting it into a sales team, you should expect not only messages, but more success on sales, shorter sales cycles, etc. So you can get more detailed. These days, you can do even more, and have the system actually tag what the messages are about and count them. You can go deeper.
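As a rough illustration of what that tagging and counting might look like, here’s a minimal sketch in Python. The topic keywords and messages are purely hypothetical; a real platform would have its own, richer tagging mechanism.

    from collections import Counter

    # Hypothetical topic keywords; a real system would tag far more richly.
    TOPIC_KEYWORDS = {
        "sales_wins": ["closed", "won", "signed"],
        "feedback": ["suggest", "improve", "review"],
        "coordination": ["meeting", "schedule", "handoff"],
    }

    def tag_message(text):
        """Return the topics whose keywords appear in the message."""
        lower = text.lower()
        return [topic for topic, words in TOPIC_KEYWORDS.items()
                if any(word in lower for word in words)]

    def count_topics(messages):
        """Count how often each topic shows up across a set of messages."""
        counts = Counter()
        for message in messages:
            counts.update(tag_message(message))
        return counts

    # Hypothetical usage
    messages = [
        "We closed the Acme deal today!",
        "Can we schedule a review meeting to suggest improvements?",
    ]
    print(count_topics(messages))  # e.g. Counter({'sales_wins': 1, 'feedback': 1, ...})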

Which is what I think is the answer here. What skills do you want? For an innovation demo with Upside Learning, I argued we should break it down. That includes how to work out loud, and how to provide feedback, and how to run group meetings. (I’m just reading Alex Edmans’ May Contain Lies, and it contains a lot of details about how to consider data and evidence.) We can look for more granular evidence. Even for skills like team dynamics, you should be looking at what makes good dynamics. So, things like making it safe yet accountable, providing feedback on behavior not on the person, valuing diversity, etc. There should be specific skills you want to develop, and assess. These, then, become the skills you design your learning to accomplish. You are, basically, creating a curriculum of the various skills that comprise the aggregated topic.
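As a rough sketch of what such a curriculum might look like as a data structure, here’s one way to represent an aggregated topic broken into targetable sub-skills, plus a simple filter for the ones an upfront assessment flags as missing. The sub-skill names and assessment results are hypothetical, not from the original post.

    # Hypothetical breakdown of an aggregated topic into targetable sub-skills.
    TEAM_DYNAMICS_CURRICULUM = {
        "psychological_safety": "Make it safe, yet accountable, to speak up",
        "behavioral_feedback": "Give feedback on the behavior, not the person",
        "valuing_diversity": "Actively seek and use diverse perspectives",
        "working_out_loud": "Share work in progress with the team",
    }

    def skills_to_target(assessment):
        """Return the sub-skills an upfront assessment flags as missing."""
        return [skill for skill, present in assessment.items() if not present]

    # Hypothetical a priori assessment results for one team.
    assessment = {
        "psychological_safety": True,
        "behavioral_feedback": False,
        "valuing_diversity": True,
        "working_out_loud": False,
    }
    print(skills_to_target(assessment))
    # ['behavioral_feedback', 'working_out_loud'] -> design learning for these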

It may be that you assess a priori, and discover that only some are missing in your teams. That upfront analysis should happen regardless, but is too infrequent. The interlocutor here also mentioned the audience complaining about the time for analysis. Yep, that’s a problem. Reckon you have to sell the whole package: analyzing, designing, and evaluating for impact on performance, not just some improvement. Yet, compared to throwing money away? Seems like targeting intervention efforts should be a logical sell. If only we lived in a rational world, eh?

Still, overall, I think that these broad programs break down into specific skills that can be targeted and developed. And, we should. Let’s not get away with vague intentions, explanations, and consequently no outcomes. Let’s do the work, break it down, and develop actual skills. That, at least, is my take, I welcome hearing yours!
