Learnlets

Clark Quinn’s Learnings about Learning

Align, deepen, and space

8 July 2014 by Clark 1 Comment

I was asked, in regards to the Serious eLearning Manifesto, how people could begin to realize the potential of eLearning. I riffed on this once before, but I want to spin it a different way. The key is making meaningful practice. And there are three components: align it, deepen it, and space it.

First, align it. What do I mean here? I mean make sure that your learning objective, what they’re learning, is aligned to a real change in the business: something you know that, if people improve at it, will have an impact on a measurable business outcome. This means two things, underneath. First, it has to be something that, if people do it differently and better, will solve a problem in what the organization is trying to do. Second, it has to be something that benefits from learning. If it’s not a cognitive skill shift, it should be addressed with a tool, or replaced by using a tool. Only use a course when a course makes sense, and make sure that course is addressing a real need.

Second, deepen it. Abstract practice and knowledge tests are both less effective than practice that puts the learner in a context like the one they’ll face in the workplace, making the same decisions they’ll need to make after the learning experience. Contextualize it, and exaggerate the context (in appropriate ways) to raise the level of interest and importance closer to the level of engagement that will be involved in live performance. Make sure that the challenge is sufficient, too, by having alternatives that are seductive unless you really understand. Reliable misconceptions are great distractors, by the way. And have sufficient practice that leads from learners’ beginning ability to the final ability they need, until they can’t get it wrong (not just until they get it right; that’s amateur hour).
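As a minimal sketch of that idea (the scenario, options, and feedback below are purely hypothetical, not from any real course), each distractor can carry the misconception it diagnoses, so a wrong answer triggers feedback aimed at that specific misunderstanding:

```python
from dataclasses import dataclass, field

@dataclass
class Distractor:
    text: str           # the seductive wrong answer
    misconception: str  # the reliable misconception it diagnoses
    feedback: str       # feedback addressing that misconception

@dataclass
class ScenarioQuestion:
    context: str        # workplace-like setting framing the decision
    prompt: str
    correct: str
    correct_feedback: str
    distractors: list = field(default_factory=list)

    def respond(self, answer: str) -> str:
        """Return feedback tied to the learner's likely misunderstanding."""
        if answer == self.correct:
            return self.correct_feedback
        for d in self.distractors:
            if answer == d.text:
                return d.feedback
        return "Not one of the options; re-read the scenario."

# Hypothetical use:
q = ScenarioQuestion(
    context="Your milestone slipped two days and the client is calling.",
    prompt="What do you do first?",
    correct="Assess the impact before committing to a new date",
    correct_feedback="Right: commitments should follow analysis.",
    distractors=[Distractor(
        text="Promise to make up the time next sprint",
        misconception="Slack can always absorb slippage",
        feedback="Slack is finite; unexamined promises compound the slip.",
    )],
)
print(q.respond("Promise to make up the time next sprint"))
```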

Here’s where the third, space it, comes in. Will Thalheimer has written a superb document (PDF) explaining the need for spacing. You can space out the development of complexity and the sufficiency of practice, but we need to practice, rest (read: sleep), and then practice some more. Any meaningful learning really can’t be done in one go; it has to be spread out. How much? As Will explains, that depends on how complex the task is, how often the task will be performed, and the gaps in between, but it’s a fair bit. Which is why I say learning should be expensive.
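For a rough illustration only (the base gap and doubling multiplier below are made-up placeholders, not Will’s numbers), an expanding-interval schedule might be computed like this:

```python
from datetime import date, timedelta

def expanding_schedule(start: date, sessions: int,
                       base_gap_days: int = 1, multiplier: float = 2.0) -> list:
    """Spread practice sessions with expanding gaps between them.

    base_gap_days and multiplier are placeholders; the right values depend
    on task complexity and how often, and after what delay, the task will
    actually be performed.
    """
    dates, gap = [start], float(base_gap_days)
    for _ in range(sessions - 1):
        dates.append(dates[-1] + timedelta(days=round(gap)))
        gap *= multiplier  # each gap roughly doubles: 1, 2, 4, 8 ... days
    return dates

print([d.isoformat() for d in expanding_schedule(date(2014, 7, 8), 5)])
# ['2014-07-08', '2014-07-09', '2014-07-11', '2014-07-15', '2014-07-23']
```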

After these three steps, you’ll want to include only the resources that will lead to success, provide models and examples that support success, etc. But I believe that, regardless, learners with good practice are likely to get more out of the learning experience than from any other action you can take. So start with good practice, please!

Aligning with us

12 March 2014 by Clark Leave a Comment

The main complaint I have about what L&D does isn’t so much that it’s still mired in the industrial age of plan, prepare, and execute, but that it’s just not aligned with how we think, learn, and perform, certainly not for information-age organizations. There are very interesting rethinks in all these areas, and our practices are not aligned with them.

So, for example, the evidence is that our thinking is not the formal, logical thinking that underpins our assumptions about how to support it. Recent work paints a very different picture of how we think. We abstract meaning but don’t handle concrete details well, we have trouble doing complex thinking and focusing attention, and our thinking is very much influenced by context and the tools we use.

This suggests that we should be looking much more at contextual performance support and providing models, saving formal learning for cases when we really need a significant shift in our understanding and how that plays out in practice.

Similarly, we learn better when we’re emotionally engaged, when we’re equipped with explanatory and predictive models, and when we practice in rich contexts. We learn better when our misunderstandings are understood, when our practice adjusts to how we’re performing, and when feedback is individual and richly tied to conceptual models. We also learn better together, and when our learning-to-learn skills are well honed.

Consequently, our learning similarly needs support in attention, rich models, emotional engagement, and deeply contextualized practice with specific feedback. Our learning isn’t a result of a knowledge dump and a test, and yet that’s most of what we see.

And not only do we learn better together, we work better together. The creative side of our work is enhanced significantly when we’re paired with diverse others in a culture of support, and when we can experiment. And it helps if we understand how our work contributes, and we’re empowered to pursue our goals.

This isn’t a hierarchical management model, it’s about leadership, and culture, and infrastructure.  We need bottom-up contributions and support, not top-down imposition of policies and rigid definitions.

Overall, the way organizations need to work requires aligning all the elements with the way our minds operate. If we want to optimize outcomes, we need to align for both performance and innovation. Shall we?

Exaggeration and Alignment

4 February 2014 by Clark Leave a Comment

In addition to my keynote and session at last week’s Immersive Learning University event, I was on a panel with Eric Bernstein, Andy Peterson, & Will Thalheimer. As we riffed about Immersive Learning, I chimed in with my usual claim about the value of exaggeration, and Will challenged me, which led to an interesting discussion and (in my mind) this resolution.

So, I talk about exaggeration as a great tool in learning design. That is, we’re too often reined in to the mundane, and I think that whether it’s taking things a little more extreme or jumping off into a fantasy setting (which are similar, really), exaggeration brings the learning experience closer to the emotion of the performance environment (when it matters).

Will challenged me on the need for transfer: the closer the learning experience is to the performance environment, the better the transfer, which has been demonstrated empirically. Eric (if memory serves) also raised the issue of alignment to the learning goals: you can’t let production values make you lose sight of the original cognitive skills. (We also talked about when such experiences matter, and I believe it’s when you need to develop cognitive skills.)

And they’re both right, although I subsequently pointed out that when the transfer goal is farther, e.g. the specific context can vary substantially, exaggeration of the situation may facilitate transfer. Ideally, you would have practice across contexts spanning the application space, but that might not be feasible if we’re high up on the line going from training to education.

And of course, keeping the key decisions at the forefront is critical. The story setting can be altered around those decisions, but the key triggers for making those decisions and the consequences must map to reality, and the exaggeration has to be constrained to elements that aren’t core to the learning. Which should be minimized.

Which gets back to my point about the emotional side. We want to create a plausible setting, but one that’s also motivating. That happens by embedding the decisions in a setting that’s somewhat ‘larger than life’, where we’re emotionally engaged in ways consonant with the ones we will be when we’re performing.

Knowing what rules to break, and when, here comes down to knowing what is key to the learning and what is key to the engagement, and where they differ. Make sense?

Aligning coherency

2 April 2013 by Clark Leave a Comment

(Image: the layers of the coherent organization)

In thinking about the coherent organization, I had a couple of realizations. One is about how those layers actually are replicated at different levels. The other is about how those levels need to be aligned with the organization’s overall vision.

For one, those work teams can be at any level. There will be work teams at the level where the work gets done, but there’ll also be work teams at the management and even executive levels. Similarly, there are communities of practice at all these levels as well. Even the top-level executives can be members of several communities, both as executives of their own org and with their peers at other orgs.

Moreover, at each of these levels they need to be tapping into what’s happening outside the organization, and tracking the implications for what they do. They need to feed back out as well (though not, of course, their proprietary information).

The two way flow of information has to be in and out as well as up and down.  Communication, for both collaboration and cooperation, is key.

(Image: aligning the coherent organization)

A second necessary component is alignment. Those groups, at every level, need to be working in alignment with the broader organization’s goals and vision. When Dan Pink talks about the elements of motivation in Drive, the third element, purpose, is about knowing what you’re doing and why it’s important. So organizations have to be clear about what they’re about, and make sure everyone knows how they fit. Then you can provide autonomy and paths to mastery (the other two elements) and get people working from intrinsic motivation.

An integrated focus on communication and alignment is key to developing the ability to continually innovate, and to cope with the increasing complexity that will make or break an organization. That’s how it seems to me.

#itashare

In praise of reminders

17 June 2025 by Clark Leave a Comment

I have a statement that I actively recite to people: if I promise to do something, and it doesn’t get into a device, we never had the conversation. I’m not trying to be coy or problematic; there are sound reasons for this. It’s part of distributed cognition, and of augmenting ourselves. It’s also part of a bigger picture, but here I am in praise of reminders.

Scheduling by the clock is relatively new from a historical perspective. We used to use the sun, and that was enough. As we engaged in more abstract and group activities, we needed better coordination. We invented clocks and time as a way to accomplish this. For instance, train schedules.

It’s an artifact of our creation, thus biologically secondary. We have to teach kids to tell time! Yet, we’re now beholden to it (even if we muck about with it, e.g. changing time twice a year, in conflict with research on the best outcomes for us). We created an external system to help us work better. However, it’s not well-aligned with our cognitive architecture, as we don’t naturally have instincts to recognize time.

We work better with external reminders. So, we have bells ringing to signal it’s time to go to another course, or to attend worship. They’re similar to, but different from, other auditory signals (which also don’t depend on our spatial attention), such as horns, buzzers, sirens, and the like. They can draw our attention to something we should attend to. Which is a good thing!

I, for one, became a big fan of the Palm Pilot (I could only justify a III when I left academia, for complicated reasons). Having a personal device on which I could add and edit things like reminders on a date/time calendar fundamentally altered my effectiveness. Before, I could miss things if I disappeared into a creative streak on a presentation, paper, diagram, etc. With it, I could be interrupted and alerted that I had an appointment: a call, a meeting, etc. I automatically attach alerts to all my calendar entries.

Granted, I pushed myself to see just how effective I could make myself. Thus, I actively cultivated my address book, notes, and reminders as well as my calendar (and still do). But this is one area that’s really continued to support my ability to meet commitments, something I immodestly pride myself on delivering. I hate having to apologize for missing a commitment! (I’ll add multiple reminders to critical things!) Which doesn’t mean you shouldn’t actively avoid all the unnecessary events people would like to add to your calendar; that’s just self-preservation!

Again, reminders are just one aspect of augmenting ourselves. There are many tools we can use – creating representations, externalizing knowledge, … – but this one in particular has been a big key to improving my ability to deliver. So I am in praise of reminders, as one of the tools we can, and should, use. What helps you?

(And now I’ll tick the box on my weekly reminder to write a blog post!)

Expert in the loop

10 June 2025 by Clark Leave a Comment

A couple of recent occurrences have prodded me to think. (Dangerous, I know!) In this case, generative AI continues to generate ;) hype and concern in close to equal measure. Which means it dominates conversations, including one I had recently with Markus Bernhardt. Then, there was a post by Simon Terry that said something related, but that doesn’t completely align. So, here are some thoughts arguing for having an expert in the loop.

First, Markus is a neighbor as well as an AI strategist of renown, and I’m grateful we can regularly converse. (And usually about AI!) His depth and practical experience in guiding organizations complements my long-standing fascination with AI. One item in particular was of note. We were discussing how you need a person to vet what comes out of generative AI. And it became clear that it can’t just be anybody: it takes someone with expertise in the area to determine whether what’s said is true.

That would suggest that the AI is redundant. However, there are limitations to our cognition. As I’ve recounted numerous times, technology does well what we don’t, and vice-versa. So, we use tools. One of the things we do is unconsciously forget aspects of solutions that we could benefit from. Hence, for instance, checklists. In this case, Generative AI can be a thinking partner in that it can spin up a lot of ideas. (Ignoring, for the moment, issues like intellectual property and environmental costs, of course.) They may not be all good, or even accurate, but…they may be things we hadn’t recalled or even thought of. Which would be a nice complement to our thinking. It requires our expertise, but it’s a plausible role.

Now, Simon was talking about how ‘human in the loop’ perpetuates a view of humans as cogs in a machine. And I get it. I, too, worry about having people riding herd on AI. That is, for instance, AI doing the creative work, and humans taking responsibility. That’s broken. But, having AI as a thinking partner, with a human generating ideas with AI, and taking responsibility for the accuracy as well as the creativity, doesn’t seem to be problematic. (And I may be wrong, these are preliminary thoughts!)

Still, I think that just a ‘human in the loop’ could be wrong. Having an expert in the loop, as Markus suggested, may be a more appropriate situation. He pointed out a couple of ways Generative AIs can introduce errors, and it’s a known problem. We have to have a person in the loop, but who? As I recounted recently, are we just training the AI? Still, I can see a case being made that this is the right way to use AI. Not as an agent (acting on its own, *shudder*), but as a partner. Thoughts?

What does ‘evidence-informed’ mean?

3 June 2025 by Clark Leave a Comment

We colloquially tout the Learning Development Accelerator as a society for ‘evidence-based’ practice. Or, more accurately, as ‘evidence-informed’, as Mirjam Neelen & Paul Kirschner advise us in their tome. But, what does ‘evidence-informed’ mean, in practice? Does everything you do have to align with what research tells us? What’s the practical interpretation? So, I have an admission to make.

To start, if you go to the LDA site (I just did), it says: “Explores and encourages research-aligned practices”. That is a noble goal, to be sure. Let’s be clear, however: research doesn’t cover all our particular situations. In fact, it’s unlikely to cover any of our specific situations. Much of the research we use is done on psychology undergraduates, and frequently for education purposes, e.g. K12 or higher ed. Which means it’s indicative of our general cognitive processing, but not our specific situations.

There is research on organizational learning, to be sure. It’s not always pristine laboratory conditions, as it may well be meeting real-world needs. Of course, we do see some A/B-type studies. Still, while legitimate, they’re not likely to be our particular situation. That is, our particular audience, our specific learning objectives, our timeline, our urgency, etc.

So what does one do? We must abstract the underlying principles, and reinstantiate them for our circumstances. There are good overall principles, such as the benefit of generative activities and spaced retrieval practice. The specifics, of course, such as choosing the right activities (Thiagi & Matt have a whole book on this!) and the right parameters for retrieval (we’re asking for that at Elevator9), mean that we have to customize. Which means we have to test and tune. We can’t expect to get it right the first time. (Though we’ll get better over time.)

There will be times when we’re doing something far enough away from the research that we’re kind of making it up as we go along. (An area I love, as it requires considering all the models I’ve mentally collected over the years.) Then, we may find good examples to use as guidance: someone’s tried something, and it worked for them. If you look at the LDA Research Checklist, for instance, you’ll see that replicated research is desirable. Well, that’s the ideal. We live in the real world, however. BTW, this is a good reason to share what you learn (you may have to anonymize it, for sure), so others benefit.

So, and this is where I make an admission, there will be times when we don’t have adequate guardrails. There are times when we have only some examples, or we’re basically wading into new areas. Then, we are free, with a caveat: we can’t do what’s been shown to be wrong. For instance, learning styles. Or the attention span of a goldfish. Or any of the other myths. My take, and I require this for LDA Press as well, is that we ask for the evidence base, but we require that submissions not violate what’s known.

So, evidence-based, research-aligned, etc., at least means avoiding what has been shown not to work. It starts with using the best available evidence to guide design, and then testing (which research also tells us to do!). Why? Because we get better outcomes. We do know that not following the research is unlikely to yield an impact. Learning design is, at core, a probabilistic game. Increasing the likelihood of a real impact should be what we’re about. Doing so on the basis of research is a faster and more reliable path to having an impact. Ultimately, the answer to the question “what does ‘evidence-informed’ mean?” is better outcomes. Who doesn’t want that?

Software engineer vs programmer

20 May 2025 by Clark Leave a Comment

(Image: a rotund little alien character, green with antennas, dressed in a futuristic space suit, standing on the ground with a starry sky behind them.)

If you go online, you’ll find many articles about the difference in roles between software engineers and programmers. In short, the former have formal training and background and, at least in this day and age, oversee coding from a more holistic perspective. Programmers, on the other hand, do just that: make code. Now, I served in a school of computer science for a wonderful period of my life. Granted, my role was teaching interface design (and researching ed tech). Still, I had exposure to both sides. My distinction between software engineer and programmer, however, is much more visceral.

Early in my consulting career, I was asked to partner with a company to develop learning. The topic was project management for non-project managers. They chose me for my game design experience as well as my learning science background. The company that contracted me was largely focused on visual design; for instance, the owner also taught classes on it. Moreover, their most recent project was an illustrated book on the fauna of a fictitious world in the Star Wars universe. He also had a team of folks back in India. Our solution was a linear scenario, quite visual, set in outer space, both because of his team’s experience and the audience of engineers.

After the success of the project, the client came back and asked for a game to accompany the learning experience. Hey, no problem, it’s not like we’ve already addressed the learning objectives or anything! Still, I like games! This was going to be fun. So I dug in, cobbling together a game design. We used the same characters from the previous experience, but now focused on making project management decisions and dealing with different personality types (the subtext was, don’t be a difficult person to work with).

The core mechanic was:

  • choose the next project
  • assess any problem
  • find the responsible person
  • ask (appropriately) for the fix

Of course, the various rates of problems, the stage of development (and therefore the responsible person), and the stage and scope of the project were all going to need tuning. In addition, we wanted the first n problems to involve cooperative people, so learners could master the details before beginning to deal with more difficult personality types.

So, from my development docs, they hired a Flash programmer to build the game. And when we tried to iterate, we got more bugs instead of improvements. This happened twice. I realized the coders were hard-wiring the parameters throughout the code, which meant that if you wanted to tune a value, they had to search the whole codebase to change it everywhere. Now, for those who know, this is incredibly bad programming. It wasn’t untoward for a small Flash animation, but it didn’t scale to a full game program.

We had a discussion, and they finally procured someone who actually understood the use of constants, someone with more than just a programming background. Suddenly, tweaks were coming back with short turnaround, and we could tune the experience! Thus, we were able to create a game that actually was fun. We never really learned whether it was effective, because no metrics for impact had been set, but the client was happy and touted the game in several venues. We took that as a positive outcome ;).
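To make the contrast concrete, here’s a minimal sketch (transposed from Flash to Python, with parameter names and values invented for illustration, not recovered from the actual game):

```python
# Anti-pattern the first coders used: the same tuning value hard-wired at
# every point of use, so one tweak means hunting through the whole codebase.
def problem_rate_bad(stage: int) -> float:
    if stage > 3:       # magic number
        return 0.35     # magic rate, likely repeated elsewhere too
    return 0.2

# The fix: every tunable parameter named once, in one place, so iterating
# on game balance is a one-line change. All values are hypothetical.
TUNING = {
    "late_stage_threshold": 3,   # stage at which problems become more frequent
    "problem_rate_early": 0.2,   # chance of a problem per turn, early stages
    "problem_rate_late": 0.35,   # chance of a problem per turn, later stages
    "easy_people_first_n": 5,    # problems involving cooperative people before
                                 # difficult personality types appear
}

def problem_rate(stage: int) -> float:
    if stage > TUNING["late_stage_threshold"]:
        return TUNING["problem_rate_late"]
    return TUNING["problem_rate_early"]
```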

The take-home lesson, of course, is that if you need tuning (and, for anything of sufficient size that’s user-facing, you will), you need someone who understands proper code structure. I’ll always ask for someone who understands software engineering, not just a programmer. There’s a reason that a) they’re known as ‘cowboy coders’, and b) there’s software process! That’s my personal definition of software engineer vs programmer, and I realize it’s out of date in this era of increasingly complex software. Still, the value of structure and process isn’t restricted to software, and is ever more important, eh?

Locus of intelligence

6 May 2025 by Clark 1 Comment

I’m not a curmudgeon, or even anti-AI (artificial intelligence). To the contrary! Yet, I find myself in a bit of a rebellion in this ‘generative‘ AI era. And I’m wondering why. The hype, of course, bugs me. But it occurs to me that a core problem may reside in where we put the locus of intelligence. Let me try to make it clear.

In the early days of the computer (even before my time!), the commands were to load memory into registers, conduct boolean operations on them, and display the results; the commands to do so were at the machine level. We went a level above by translating those machine instructions into somewhat more comprehensible terms: assembly language. As we went along, we put more and more of the onus on the machine, because we had more processor cycles, better software, etc. We’re largely at the point where we can stipulate what we want, and the machine will code it!

There are limits. When Apple released the Newton, they tried to put the onus on the machine to read human writing. In short, it didn’t work. Palm’s Pilots succeeded because Jeff Hawkins went for Graffiti as the language, which shared the responsibility between person and processor. Nowadays we can do speech and text recognition, but there are still limitations. Yes, we have made advances in technology, but some of it’s done by distributing to non-local machines, and there are still instances where it fails.

I think of this when I think of prompt engineering. We’ve trained LLMs with vast quantities of information. But, to get it out, you have to ask in the right way! Which seems like a case of having us adapt to the system instead of vice versa. You have to give them heaps more context than a person would need, and they still can hallucinate.

I’m reminded of a fictional exchange I recently read (of course I can’t find it now), where the AI user is being advised to know the domain before asking the AI. When the user queries why they would need the AI if they know the domain, they’re told they’re training the AI!

As people investigate AI usage, one of the results is that your initial intelligence indicates how much use you’ll get out of this version of AI. If you’re already a critical thinker, it’s a good augment. If you’re not, it doesn’t help (and may hinder).

Sure, I have problems with the business models (much not being accounted for: environmental cost, IP licensing, security, VC boosting). But I’m more worried about people depending too much on these systems without truly understanding what the limitations are. The responsible folks I know advocating for AI always suggest having a person in the loop. Which is problematic if you’re giving such systems agency; it’ll be too late if they do something wrong!

I think experimenting is fine. I think it’s also still too early to place a bet on a long-term relationship with any provider. I’m seeing more and more AI tools, e.g. content recommenders, simulation avatars, and the like. Like with the LMS, when anyone who could program a database would build one, I’m seeing everyone wanting to get in on the goldrush. I fear that many will end up losing their shirts. Which is, I suppose, the way of the world.

I continue to be a big fan of augmenting ourselves with technology. I still think we need to consider AI a tool, not a partner. It’s nowhere near being our intellectual equal. It may know more, but it still has limitations overall. I want to develop, and celebrate our intelligence. I laud our partnership with technologies that augment what we do well with what we don’t. It’s why mobile became so big, why AI has already been beneficial, and why generative AI will find its place. It’s just that we can’t allow the hype to blind us to the real locus of intelligence: us.

Intelligent Tutoring via Models

22 April 2025 by Clark Leave a Comment

Today I read that Anthropic has released Claude for Education (thanks, David ;). And, it triggered some thinking. So, I thought I’d share. I haven’t fully worked out my thoughts, so this is preliminary. Still, here’re some triggered reflections on Intelligent Tutoring via models.

(Image: intelligent tutoring system architecture, with an AI underpinning; learner, tutoring, and content models; and a user-system interface.)

So, as I’ve mentioned, I’ve been an AI groupie. That includes tracking the AI and education field, since that’s the natural intersection of my interests. Way back when, Stellan Ohlsson abstracted the core elements of an intelligent tutoring system (ITS): a student (learner) model, a domain (expert content) model, and an instruction (tutoring) model. So, a student working on a problem takes an action, and we compare it to what an expert in the domain would do. From that basis, the pedagogy determines what to do next. Such systems have been built, work in research, and have even been successfully employed in the real world (see Carnegie Learning).
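As a minimal sketch of that cycle (the three interfaces here are invented for illustration, not any real system’s API):

```python
class DomainModel:
    """Expert knowledge of the content: what would an expert do here?"""
    def expert_step(self, problem, state):
        ...

class LearnerModel:
    """Tracks where this learner's behavior diverges from the expert's."""
    def update(self, learner_action, expert_action):
        ...

class TutoringModel:
    """Pedagogy: given the diagnosed gap, hint, show an example, or move on."""
    def next_move(self, learner_model):
        ...

def tutoring_cycle(problem, state, action,
                   domain: DomainModel, learner: LearnerModel,
                   tutor: TutoringModel):
    expert_action = domain.expert_step(problem, state)  # expert comparison
    learner.update(action, expert_action)               # diagnose the gap
    return tutor.next_move(learner)                     # pedagogical response
```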

Now, I’ve largely been pessimistic about the generative AI field, for several reasons. These include that it’s:

  • evolutionary, not revolutionary (more and more powerful processors using slight advances on algorithms yields a quantum bump)
  • predicated on theft and damage (IP and environmental issues)
  • likely to lead to ill use (laying off folks to reduce costs for shareholder returns)
  • based upon biz models boosted by VC funds and as yet still volatile (e.g. don’t pick your long term partners yet)

Yet, I’ve been upbeat about AI overall, so it’s mostly the hype and the unresolved issues that bug me. So, seeing the features touted for this new system made me think of a potential way we might get the desired output. Which is how I (and we) should evolve.

As background, several decades back I was leading a team developing an adaptive learning system. The problem with ITSs is that the content model is hard to build: you have to capture how experts reason in the field, and then model it through symbolic rules. In this instance I had the team focus on the tutoring model instead, and used a content model based upon learning objects, with the relationships between them capturing the knowledge. Thus, you had to be careful in the content development. (This was an approach we got running. A commercial company subsequently brought it to market successfully, a decade after our project. Of course, our project was burned to the ground by greed and ego.)

So, what I realized is that, with the right constraints, you could perhaps build such an intelligent tutoring system now. First, the learner model might be primed by a pre-test, but is built up from learner actions. The content model could come from training on textbooks: either symbolic processing of the prose (a task AI can do), or a machine-learning (e.g. LLM) version built by training. Then, the tutoring model could be symbolic, capturing the best of our rules, or trained on a (procured, not stolen) database of interventions (something Kaplan was doing, for instance). (In our system, we wrote rules, but had parameters that could be tuned by machine learning over time to get better.)
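A tiny sketch of that hybrid (the rule, parameter name, and update scheme are all purely illustrative, not our system’s actual rules):

```python
# A hand-written pedagogical rule whose threshold is an exposed parameter,
# nudged over time from outcome data rather than hard-coded.
params = {"hint_error_threshold": 2.0}  # errors tolerated before hinting

def should_hint(recent_errors: int) -> bool:
    return recent_errors >= params["hint_error_threshold"]

def tune_from_outcome(hint_helped: bool, lr: float = 0.1) -> None:
    """Crude online update: if hints at this threshold aren't helping,
    wait longer before intervening; if they are, intervene a bit sooner."""
    params["hint_error_threshold"] += lr if not hint_helped else -lr
```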

My thought is that, in short, we can start having cross-domain tutoring. We can have a good learning model, and use auto-categorization of content. Now, this does raise the problem of knowledge versus skills, which I still worry about. (And continue to look at.) Still, it appears that this particular solution is looking at that opportunity. I’ll be keen to see how it goes; maybe we can have real learning support. If we blend this with a coaching engine, maybe the dream I articulated a long time ago might come to fruition.
