Learnlets


Clark Quinn’s Learnings about Learning

Direct Instruction or Guided Discovery

16 July 2019 by Clark

Recently, colleague Jos Arets of the 70:20:10 institute wrote a post promoting evidence-based work. And I’m a big fan, both of his work and the post. In the post, however, he wrote one thing that bugs me. And I realize I’m flying in the face of many august folks on whether to promote direct instruction or guided discovery. So let me explain myself ;).

It starts with a famous article by noted educational researchers Paul Kirschner, John Sweller, and Richard Clark. In it, they argue against “constructivist, discovery, problem-based, experiential, and inquiry-based teaching”. That’s a pretty comprehensive list. Yet these are respected authors; I’ve seen Richard Clark talk, have talked with John Sweller personally, and have interacted with Paul Kirschner online. They’re smart and good folks committed to excellent work. So how can I quibble?

First, it comes from their characterization of the opposition as 'minimally guided'. Way back in 1985, Wallace Feurzeig was talking about 'guided discovery', not pure exploration. To me, that's a bit of a 'straw man' argument. Appropriately guided, not minimally guided, seems to me the right approach.

Further, work by David Jonassen for one, and a meta-analysis conducted by Strobel & van Barneveld for another, suggests different outcomes. The general finding is that problem-based learning (one instance of what's being argued against) doesn't yield quite as good performance on an immediate test, but is retained longer and transfers better. And those, I suggest, are the goals we should care about. Similarly, research supports attempting to solve problems before you learn, even if you can't yet succeed.

And I worry about the phrase "direct instruction". That's easy to interpret as 'information dump and knowledge test'; it sounds like the old 'error-free learning'! I'm definitely not accusing those esteemed researchers of implying that, but I am afraid that under-informed instructors could draw that implication. It's all too easy to see too much of that in classrooms. Teaching strategies tend to ignore results like spaced, varied, and deliberate practice. Similarly, the support for students to learn effective study skills is woeful.

Is there a reconciliation? I suggest there is. Professors Kirschner, Sweller, & Clark would, I suggest, expect sufficient practice to a criterion, and that the practice should match the desired performance. I suspect they want learners solving meaningful problems in context, which to me is problem-based learning. And their direct instruction would be targeted feedback, along with models and examples. Which is what I strongly suggest. The more transfer you need, however, the broader the contexts you need. Similarly, the more flexible the application required, the more gradually the scaffolding should be removed.

So I really think that guided exploration and meaningful direct instruction will converge in what eventuates in practice. Look, insufficiently guided practice isn't effective, and I suspect they wouldn't suggest that bullet points are effective instruction. I just want to ensure that we focus on the important elements, e.g. what we highlighted in the Serious eLearning Manifesto. There's reason to think, I'll suggest, that direct instruction versus guided discovery isn't the dichotomy proposed. FWIW.

Dimensions of difficulty

11 July 2019 by Clark

As one of the things I talk about, I was exploring the dimensions of difficulty for performance that guide the solutions we should offer. What determines when we should use performance support, when we should automate, when we need formal training, or a blend, or...? It's important to have criteria so that we can make a sensible determination. So, I started trying to map it out. And, not surprisingly, it's not complete, but I thought I'd share some of the thinking.

So one of the dimensions is clearly complexity. How difficult is this task to comprehend? How much does it vary? Connecting and operating a simple device isn't very complex. Addressing convoluted product complaints can be much more so. Certainly we need more support if it's more complex. That could mean putting information into the world if possible. It also suggests more training if it has to be in the head.

A second dimension is frequency of use. If it's something you'll likely do frequently, getting you up to speed is more important than maintaining your capability. On the other hand, if it happens only infrequently, it's hard to keep it in the head, and you're more likely to want to keep it in the world.

And a third obvious dimension is importance. If the consequences of a mistake aren't too onerous, you can be more slack. On the other hand, if lives are on the line, the consequences of failure raise the game. You'd like to automate it if you could (machines don't fatigue), but of course then the situation has to be well defined. Otherwise, you're going to want a lot of training.

And it’s the interactions that matter. For instance, flight errors are hopefully rare (the systems are robust), typically involve complex situations (the interactions between the systems mean engines affect flight controls), and have big consequences!  That’s why there is a huge effort in pilot preparation.

It’s hard to map this out. For one, is it just low/high, or does it differentiate in a more granular sense: e.g. low/medium/high?  And for three dimensions it’s hard to represent in a visually compelling way. Do you use two (or three) two dimensional tables?

Yet you’d like to capture some of the implications: example above for flight errors explains much investment. Low consequences suggest low investment obviously. Complexity and infrequency suggest more spacing of practice.

It may be that there's no one answer. Each situation will require an assessment of the mental task. However, some principles will overarch, e.g. put it in the world when you can. Avoiding taxing our mental resources is good. Using our brains for complex pattern matching and decision making is likely better than remembering arbitrary and rote steps. And, of course, thinking of the brain and the world as partners, Intelligence Augmentation, is better than focusing on just one or the other. Still, we need to be aware of, and assessing, the dimensions of difficulty as part of our solution. Am I missing some? Are you aware of any good guides?

Engaging Learning and the Serious eLearning Manifesto

9 July 2019 by Clark

Way back in '05, my book on games for learning was published. At its core was an alignment between what makes an effective education practice and what makes an engaging experience. There were nine elements that characterized why learning should be 'hard fun'. More recently, we released the Serious eLearning Manifesto. Here we had eight values that differentiate ordinary elearning from serious elearning. So, the open question is: how do these two lists match up? What is the alignment between Engaging Learning and the Serious eLearning Manifesto?

The elements of the Serious eLearning Manifesto (SeM) are pretty straightforward. They’re listed as:

  • performance focused
  • meaningful to learners
  • engagement driven
  • authentic contexts
  • realistic decisions
  • real-world consequences
  • spaced practice
  • individualized challenges

The alignment (EEA: Effectiveness-Engagement Alignment) I found in Engaging Learning was based upon research I did on designing games for learning. I found elements that were repeated across proposals for effective education practice, and ones that were stipulated for engaging experiences. And I found a perfect overlap. The resolution of the two lists into one set of elements looks something like:

  • clear goals
  • balanced challenge
  • context for the action
  • meaningful to domain
  • meaningful to learner
  • choice
  • active
  • consequences
  • novelty

And, with a little wordsmithing, I think we find a pretty good overlap!  Obviously, not perfect, because they have different goals, but the important elements of a compelling learning experience emerge.

I could fiddle and suggest that clear goals align to a performance focus, but instead that comes from making the learning meaningful to the domain. I suggest that what really matters to organizations is the ability to do, not just know. So, really, the goals are implicit in the SeM; you shouldn't be designing learning unless you have some learning goals!

Then, the balanced challenge is similar to the individualized challenge from the SeM. And context maps directly as well. As do consequences. And meaningfulness to learners. All these directly correspond.

Going a little further, I suggest that having choice (or the appearance thereof) is important for realistic decisions. There should be alternatives that represent misconceptions about how to act. And I suggest that the active focus is part of being engagement driven. Though so too could novelty be. I haven't mapped elements to multiple values, but multiple mappings would make sense, as several things combine to make a performance focus, as well as realistic decisions.

Other than that, on the EEA side the notion of novelty is more about engaging experiences than specific to serious elearning. On the SeM side, spaced practice is unique to learning. The notion of a game implies the ability for successful practice, so it's implicit.
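For the curious, my reading of the mapping above can be summarized as a simple lookup. Treat the pairings as my interpretation of the discussion, not a canonical table:

```python
# My interpretation of the EEA -> SeM alignment discussed above; None marks
# an element without a one-to-one counterpart.
alignment = {
    "clear goals":            "performance focused (implicit)",
    "balanced challenge":     "individualized challenges",
    "context for the action": "authentic contexts",
    "meaningful to domain":   "performance focused",
    "meaningful to learner":  "meaningful to learners",
    "choice":                 "realistic decisions",
    "active":                 "engagement driven",
    "consequences":           "real-world consequences",
    "novelty":                None,  # engagement broadly, not serious elearning per se
}
# Note: the SeM's 'spaced practice' has no EEA counterpart; it's unique to learning.
```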

My short take, through this exercise, is to feel confident in both recommendations. We're talking learning experience design here, and having the learning combine engagement as well is a nice outcome. I note that I'll be running a Learning Experience Design workshop at DevLearn in October in Las Vegas, where we'll put these ideas to work. Hope to see you there!

Graham Roberts #Realities360 Keynote Mindmap

26 June 2019 by Clark

Graham Roberts kicked off the 2nd day of the Realities 360 conference talking about the Future of Immersive Storytelling. He talked about their experiences and lessons building an ongoing suite of experiences. From the first efforts through to the most recent, it was insightful. The examples were vibrant inspirations.

Cognition external

12 June 2019 by Clark

I was thinking a bit about distributed cognition, and recognized that there was a potentially important way to tease it apart. And I'll talk it out first here, and maybe a diagram will emerge. Or not. The point is to think about how external tools can augment our thinking. Or, really, a way in which, at least partly, we have cognition external.

The evidence says that our thinking isn't completely in our head. And I've suggested that that makes a good case for performance support. But I realize it goes further, in ways I've thought about elsewhere. So I want to pull those together.

The alternative to performance support, a sort of cognitive scaffolding, is to think about representation. Here we’re not necessarily supporting any particular performance, but instead supporting developing thinking. I shared Jane Hart’s diagram yesterday, and I know that it’s a revision of a prior one. And that’s important!

The diagram captures her framework, and such externalizations are a way to share; they're social as well as artifactual sharing. It's part of a 'show your work' approach to continuing to think. Of course, it doesn't have to be social; it can be personal.

So both of these forms of distributed cognition externalize our thinking, supporting what our minds have trouble doing on their own. We can play around with relationships by representing them spatially. We can augment our cognitive gaps both formally, through performance support, and informally, by supporting the externalization of our thinking. Spreadsheets are another tool to externalize our thinking. So, too, for that matter, is text.

So we can augment our performance, and scaffold our thinking. Both can be social or solitary, but both qualify as forms of distributed cognition (beyond social). And, importantly, both should then be consciously considered in thinking about revolutionizing L&D. We should be designing for cognition external. The tools should be there, and the facilitation, to use either when appropriate. So, think distributed, as well as situated and social. It's how our brains work; we ought to use that as a guide. You think?

Labels for what we do

4 June 2019 by Clark

Of late, there's been a resurrection of a long-term problem. While it's true for our field as a whole, it's also true for the specific job of those who design formal learning. I opined about the problem of labels for what we do half a year ago, but it has raised its head again. And this time, some things have been said that I don't fully agree with. So, it's time to weigh in again.

So, first, Will Thalheimer wrote a post in which he claims to have the ultimate answer (in his usual understated way ;). He goes through the usual candidates of labels for what we do – instructional designer, learning designer, learner experience designer – and finds flaws.

And I agree with him on learning designer and instructional designer. We can’t actually design learning, we can only create environments where learning can happen. It’s a probabilistic game. So learning designer is out.

Instructional designer, then, would make sense, but...it's got too much baggage. If we had a vision of instruction that included the emotional elements – the affective and conative components – I could buy it. And purists will say they do (at least, ones influenced by Keller). But I will suggest that the typical vision is of a behavioristic approach: a rigorous focus on content and assessment, with less pragmatic attention to spacing and flexibility.

He doesn't like learning engineer for the same reason as learning designer: you can't 'engineer' learning. I don't quite agree. One problem is that right now there are two interpretations of learning engineer. My original take on the phrase was that it's about applying learning science to real problems, just as a civil engineer applies physics...and I liked that. Though, yes, you can lead learners to learning, but you can't make them think.

However, Herb Simon’s original take (now instantiated in the IEEE’s initiative on learning engineering) focused more on the integration of learning science with digital engineering. And I agree that’s important, but I’m not sure one person needs to be able to do it all. Is the person who engineers the underlying content engine the same one as the person who designs the experiences that are manifest out of that system? I think the larger picture increasingly relies on teams. So I’m taking that out of contention for now.

Will's answer: learning architect. Now, in my less-than-definitive post last year, I equated learning experience designer and learning architect, roughly. However, Will disparages the former and heaps accolades on the latter. My concern is that architects design a solution, but then it not only gets built by others, it gets interior-designed by others, and... It's too 'hands off'! And, as I pointed out, I've called myself that recently, but in that role I may have been more an architect ;).

His argument against learning experience designer doesn't sit well with me. Ignoring the aspersions cast against those to whom he attributes the label, his underlying argument is that just designing experiences isn't enough. He admits we can't ensure learning, but suggests that this is a weak response. And here's where I disagree. I think the inclusion of experience does exactly what I want to focus on: the emotional trajectory and the motivational commitment. Not to the exclusion of the learning sciences, of course. AND, I'd suggest, it also recognizes that the experience is not an event, but an extended set of activities. Specifically, it will stretch across technologies as needed.

The problem, as Jane Bozarth raised in a column, is more than just this, however. What research into the role shows is that there are just too many jobs being lumped under the label (whatever it is). Do you develop too? Do you administer the LMS? The list goes on.

I think we perhaps need multiple job titles. We can be an instructional designer, or a learning experience designer, or an instructional technologist. Or even a learning engineer (once that's clear ;). But we need to keep focused and, as Jane advised, not get too silly (wizard?). It's hard enough as it is to describe what we do without worrying about labels for it. I think I'll stick with learning experience designer for now. (Not least because I'm running a workshop on learning experience design at DevLearn this fall. ;) That's my take, what's yours?

New reality

22 May 2019 by Clark

I’ve been looking into ‘realities’ (AR/VR/MR) for the upcoming Realities 360 conference (yes, I’ll be speaking). And I found an interesting model that’s new to me, and of course prompts some thoughts. For one, there’s a new reality that I hadn’t heard of!  So, of course, I thought I’d share.

(Diagram: from reality, through augmented reality and augmented virtuality, to virtual reality.)

The issue is how AR (augmented reality) and VR (virtual reality) relate, and what MR (mixed reality) is. The model I found (by Milgram; my diagram slightly relabels it) puts MR in the middle, between reality and virtual reality. And I like how it makes a continuum.

So this is the first I'd heard of 'augmented virtuality' (AV). AR is the real world with some virtual scaffolding. AV is more of the virtual world with a little real-world scaffolding. A virtual cooking school in a real kitchen is an example: the virtual world guides the experience, instead of the real world.
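One way I find it helpful to think about the continuum is as a rough ratio of virtual to real content. A toy sketch, with a cutoff that's entirely my own invention rather than anything from Milgram:

```python
# Toy classifier for the reality-virtuality continuum; the 0.5 cutoff
# and the fraction framing are my own illustrative assumptions.

def classify(virtual_fraction: float) -> str:
    """virtual_fraction: rough share of the experience that's computer-generated."""
    if virtual_fraction <= 0.0:
        return "reality"
    if virtual_fraction < 0.5:
        return "augmented reality (real world, virtual scaffolding)"
    if virtual_fraction < 1.0:
        return "augmented virtuality (virtual world, real scaffolding)"
    return "virtual reality"

print(classify(0.8))  # e.g., the virtual cooking school in a real kitchen
```

Crude, but it makes the point that AR and AV differ in which world is doing the scaffolding.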

The core idea to me is about story. If we're doing this with a goal, what is the experience driver? What is pushing the goal? We could have a real task that we're layering AR on top of to support success (more performance support than learning). In VR, the goal has to live entirely in the simulated world. AV strikes me as having a story created in the virtual world, using virtual imagery in real locations. Kind of like The Void experience.

This reminded me of the Alternate Reality Games (ARGs) that were talked about quite a bit back in the day. They can be driven by media, so they're not necessarily limited to locations. A colleague had built an engine that would drive experiences through communications technologies: text messages, email, phone calls; these days we could add tweets and posts on social media and apps. These, in principle, are great platforms for learning experiences, as they're driven by the tools you'd actually use to perform. (When I asked my colleagues why they think ARGs 'disappeared', the reason was largely cost; that's avoidable, I believe.)

I like this continuum, as it puts ARGs and VR and AR in a conceptually clear framework. And, as I argue extensively, good models give us principled bases for decisions and design. Here we've got a way to think about the relationship between story and technology that will let us figure out the best approach for our goals. This new reality (and the others) will be part of my presentation next month. We'll see how it manifests by then ;).

Learning Lessons

16 May 2019 by Clark

So, I just finished teaching a mobile learning course online for a university. My goal was not to 'teach' mobile so much as to develop a mobile mindset. You have to think differently than the phrase 'mobile learning' might lead you to think. And, not surprisingly, some things went well, and some things didn't. I thought I'd share the learning lessons, both for my own reflection and for others.

As a fan of Nilson's Specifications Grading, I created a plan for how the assessment would go. I wanted lots of practice, less content. And I do believe in checking knowledge up front, then having social learning, and a work product. Thus, each week had a repeated structure of each element. It was competency-based, so you either did it or you didn't. No aggregation of points; instead, you get this grade if you do this many assignments correctly, write a substantive comment in a discussion board and comment on someone else's this many times, and complete this level on this many knowledge checks. And I staggered the deadlines through the week, so there'd be reactivation. I've recommended this scheme on principle, think it worked out well in practice, and I'd do it again.
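As a sketch of how that scheme works (with made-up thresholds; the real criteria lived in the syllabus), the logic is simply "meet every criterion in a band, earn that grade":

```python
# Sketch of specifications grading; the thresholds here are hypothetical.
# Each grade band requires ALL of its criteria; there's no point aggregation.
GRADE_SPECS = {  # grade: (assignments, discussion contributions, knowledge checks)
    "A": (8, 16, 8),
    "B": (7, 14, 7),
    "C": (6, 12, 6),
}

def final_grade(assignments: int, discussions: int, checks: int) -> str:
    """Competency-based: return the highest band whose criteria are all met."""
    for band, (a, d, k) in GRADE_SPECS.items():  # dicts keep insertion order
        if assignments >= a and discussions >= d and checks >= k:
            return band
    return "Incomplete"

print(final_grade(assignments=7, discussions=15, checks=7))  # -> "B"
```

The design choice that matters is the all-or-nothing band: missing one criterion drops you to the next band, rather than averaging out.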

In many ways it 'teacher proofs' the class. For one, the students are giving each other feedback in the discussion forum. The choice of discussion question and assignment were both designed to elicit the necessary thinking, which makes marking the assignments relatively easy. And the knowledge checks set a baseline of background. Designing them all as scenario challenges was critical as well.

And I was really glad I mixed things up. In early weeks, I had them look at apps, or evaluate ones that they liked. For the social week, I had them collaborate in pairs. In the contextual week, they submitted a video of themselves. They had to submit an information architecture for the design week. And for the development week, they tested it. Thus, each assignment was tied to mobile.

It was undermined by a couple of things. First, the LMS interfered. I wrote careful feedback for each wrong answer for each question on the knowledge checks. And, it turns out, the students weren't seeing it! (And they didn't let me know 'til the 2nd half of the abbreviated semester!) There's a flag I wasn't setting, and the right setting wasn't the default! (Which was a point I then emphasized in the design week: start with good defaults!)

And I missed making the discussions 'gradeable' until late, because of another flag. That's at least partly on me. It meant, again, that they weren't getting feedback, and that's not good. And, of course, it wasn't obvious 'til I remedied it. Also, my grading scheme doesn't fit the LMS's default grading schema, so it wasn't automatically doable anyway. Next time, I'd investigate that and see if I could make it more obvious. And learn about the LMS earlier. (Ok, so I had some LMS anxiety and put it off...)

With 8 weeks, I broke it up like this:

  1. Overview: mobile is not courses on a phone. The Four C's.
  2. Formal Learning: augmenting learning.
  3. Performance Support: mobile's natural niche.
  4. Social: connecting to people 'on the go'.
  5. Contextual: the unique mobile opportunity.
  6. Design: if you get the design right...
  7. Development: practicalities and testing.
  8. Strategy: platform and policy.

And I think this was the right structure. It naturally reactivated prior concepts, and developed the thinking before elaborating.

For the content, I had a small set of readings. Because of a late start, I only found out that I couldn't use my own mLearning book when the bookstore told me it was out of print (!). That required scrambling and getting approval to use some other writings I'd done. And the late start precluded me from organizing other readings. No worries; minimal was good. And I wrote a script that covered the material, and filmed myself giving a lecture for each week. Then I also provided the transcript.

The university itself was pretty good. They capped the attendance at 20. This worked really well. (Anything else would’ve been a deal breaker after a disaster many years ago when an institution promised to keep it under 32 and then gave me 64 students.)  And there was good support, at least during the week, and some support was available even over the weekend.

Overall, despite some hiccups and some stress, I think it worked out (particularly under the constraints). Of course, I'll have to see what the students say. One other thing I'd do that I didn't generally do a good job of (I did with a few students) is explain the pedagogy. I've learned this lesson in the past, and I should've done so, but in the rush to wrestle with the systems, it slipped through the cracks.

Those are my learning lessons. I welcome your feedback and lessons!

Shaming, safety, & misconceptions

14 May 2019 by Clark

Another Twitter debate, another blog post. As an outgrowth of a #lrnchat debate, a discussion arose around whether making errors in learning could be a source of shaming. This wasn't about the learners being afraid of being shamed, however. Instead, it was about whether designers would feel proscribed from designing in real errors because of their expectations about learners' emotions. And I have strong beliefs about why this is an important issue. Learners should be making errors, for important reasons. So we need to make it safe!

The importance of errors lies in the fact that we'd rather they be made in practice than when it counts. Some have argued that we literally have to fail to be ready to learn. (Perhaps almost certainly if the learners are overconfident.) The importance to me is in misconceptions. Our errors don't tend to be random (though there is some randomness); instead they're patterned. They come from systematic ways of perceiving the situation that are wrong. They come from bringing in the wrong models in ways that seem to make sense. And it's best to address them by letting learners make that choice, and get feedback about why it's wrong.

Which means learners will have to fail. And they should be able to make mistakes. (Guided) exploration is good. Learners should be able to try things out, see what the consequences are, and then try other approaches. It shouldn't be a free-for-all, since learners don't reliably explore systematically. Instead, as I've said, learning should be designed action and guided reflection. And that means we should be designing in these alternatives to the right action as options, and providing specific feedback.

So, if they're failing, is that shaming? Not if we do it right. It's about making failing okay. It's about making the learning experience 'safe'. Our feedback should be about the decision, and why it's wrong (referring to the model). We might not give them the right answer, if we want them to try again. But we don't make it personal, just like good coaching. It's about what they did, not who they are. So our design should prevent shaming by making it safe to fail, not by preventing failure.

The one issue that emerged was that designers (or other stakeholders) might fear this could be emotionally damaging, perhaps projecting fears of their own. Er, nope! It's about the learning, and we know what research tells us works. We have to be willing to do what's right, as challenging as that may be for any reason: time, money, emotions, what have you. Because, if we want to be responsible stewards of the resources entrusted to us, we should be doing what's known to be right. Not chasing shiny objects. (At least, until we get the core right. ;)

So, let’s not shame ourselves by letting irrelevant details cloud our judgment. Do the right thing. For the right reasons. We know how to be serious about our learning. Make it so.

Competencies for L&D Processes?

1 May 2019 by Clark

We have competencies for people. Whether it's ATD, LPI, IBSTPI, IPL, ISPI, or any other acronym, they've got definitions of what people should be able to do. And it made me wonder: should there be competencies for processes as well? That is, should your survey validation process, or your design process, also meet some minimum standards? How about design thinking? There are things you do get certified in, including such piffle as MBTI and NLP. So does it make sense to have processes meet minimum standards?

One of the things I do is help orgs fine-tune their design processes. When I talk about deeper elearning, or we take a stand for serious elearning, there are nuances that make a difference. In these cases, I'm looking for the small things that will have the biggest impact. It's not about trying to get folks to totally revamp their processes (which is a path to failure). Yet, could we go further?

I was wondering whether we should certify processes. Certainly, that happens in other industries. There are safety processes in maintenance, and cleanliness in food operations, and so on. Could and should we have them for learning? For performance consulting, instructional design, performance support design, etc?

Could we state what a process should have as a minimum requirement? Certain elements, at least, at certain way points? You could take Michael Allen's SAM and use it as a model, for instance. Or Cathy Moore's Action Mapping. Maybe Julie Dirksen's Design For How People Learn could be treated as such. The point being that we could stipulate some way points in design that would be the minimum for learning to be likely to occur. Based upon learning science, of course. You know, deliberate and spaced practice, etc.
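To make that concrete, a process audit might be as simple as checking a documented process against required way points. The specific items below are just my illustrative picks from learning science, not a proposed standard:

```python
# Illustrative only: way points a process certification might require.
REQUIRED_WAYPOINTS = {
    "performance-focused objectives (do, not just know)",
    "practice aligned to those objectives",
    "spaced and deliberate practice",
    "feedback targeting likely misconceptions",
    "evaluation against the original performance need",
}

def audit(process_steps: set[str]) -> set[str]:
    """Return the required way points missing from a documented process."""
    return REQUIRED_WAYPOINTS - process_steps
```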

Then the question is: should we? Also, could we agree? Or, of course, people could market alternative process certifications. It appears this is what Quality Matters does, for instance, at least for K12 and higher ed. It appears IACET does this for continuing education certification. Would an organizational certification matter? For customers, if you do customer training? For your courses, if you provide them as a product or service? Would anyone care that you meet a quality standard?

And it could go further. Performance support design, extended learning experience design (cf. coaching), etc. Is this something that's better at the person level than the process level?

Should there be certification for compliance with a competency about the quality of the learning design process? In some areas, obviously yes. The question is: does it matter for regular L&D? On one hand, it might help guard against the info dump and knowledge test courses that are the bane of our industry. On the other hand, it might be hard to find a workable definition that could suit the breadth of ways in which people meet learning needs.

All I know is that we have standards about a lot of things. Learning data interchange. Individual competencies. Processes in education. Can and should there be for L&D processes? I don’t know. Seriously. I’m just pondering. I welcome your thoughts.
