Learnlets

Clark Quinn’s Learnings about Learning

Writing, again

21 January 2025 by Clark

So, I’m writing, again. Not a book (at least not initially ;), but something. I’m not sure exactly how it’ll manifest, but it’s emerged. Rather than share what I’m writing (too early), I’m reflecting a bit on the process.

As usual, I’m writing in Word. I’d like to use other platforms (Pages? Scrivener? Vellum?), but there are a couple of extenuating circumstances. For one, I’ve been using Word since I wrote my PhD thesis on the Mac II I bought for the purpose. I think that was Word 2.0, circa the late ’80s. In other words, I’ve been using Word a long time! Then, the most important thing besides ‘styles’ (formatting, not learning) is the ability to outline. Word has industrial-strength outlining, and, to make an over-used and over-emphatic point, I live and die by outlines.

I outline my plan before I start writing, pretty much always. Not for blog posts like this, but for anything of any real length beyond such a post. Anything with intermediate headings is almost guaranteed to be outlined. I tend to prefer well-structured narratives (at least for non-fiction?). The outline will likely change, of course. When I wrote my very first book, the text pretty much followed the structure. Ever since then… My second book had me rearranging the structure as I typed. My most recent book got restructured every time I shared it with my initial readers, until suddenly it gelled.

In this case, and not unlike most cases, I move things around as I go. This should be a section all its own. That is superfluous to need. This other bit goes better here than where I originally put it. And so on. I do take a pass through to reconcile any gaps or transitions, though I try to remedy those as I go. The goal is a coherent treatment of whatever the topic is.

I throw resources in as I go. That is, if I find myself referring to a concept, I put a reminder in a References or Resources section at the end to grab a reference later. I have a separate (ever-growing) file of references for that purpose. I may not always include the reference in the document (currently I’m trying to keep the prose lean), but I want folks to have a resource at least.

I also jump around, a bit. Mostly I proceed from ‘go to whoa’, but occasionally I realize something I want to include, and put a note at the appropriate place. That sometimes ends up being prose, until I realize I need to go back to where I was ;). I hope that it leads to a coherent flow. Of course, as above, I do reread sections, and I try to give it a final read before passing it on to whatever next step is coming. Typically, that means sending it to someone to see if I’m on track or off the rails.

I’m also pondering whether to retrofit diagrams. Sometimes I’ve put them in as I go. At other times, I go back and fill them in. I do love me a good diagram, for the reasons Larkin & Simon articulated (Connie Malamed is doing a good job on visuals over at LinkedIn this month). Sometimes I edit the ones I have as I recognize improvements, sometimes I create new ones, sometimes I throw existing ones in. I add them when I think they’ll help, and I can think of several I probably should make.

The above holds true for pretty much all writing I do beyond these posts. This is for me, first, after all! Otherwise, I solicit feedback (which I don’t always get; I think folks trust me too much, at least for shorter things). I’m sure others work differently. Still, these are my thoughts on writing, again. I welcome your reflections!

Getting smarter

14 January 2025 by Clark

A number of years ago now, I analyzed the corporate market for a particular approach. Not normally something I do (I’m not a tool/market analyst), but at the time it made sense. My recommendation, at the end of the day, was that the market wasn’t ready for the product. I am inclined to think that the answer would be different today. Maybe we are getting smarter?

First, why me? A couple of reasons. For one, I’m independent. You (should) know that you’ll get an unbiased (expert) opinion. Second, this product was quite closely related to things I do know about, that is, learning experiences that are educationally sound. Third, the asker was not only a well-known proponent of quality learning, but knew I was also a fan of the work. So, while I’m not an analyst, few actual analysts would’ve really understood the product’s value proposition, and I do know the tools market at a useful level. I knew there was nothing else on the market like it, and I also knew the things that were closest (from my work authoring simulation games, as in my first book, and from the research reports for the Learning Guild).

The product itself allowed you to author deep learning experiences. That is, ones where you immerse yourself in authentic tasks, with expert support and feedback. Learning tasks that align with performance tasks are the best practice environments, and in this case they were augmented with resources available at the point of need. The main problem was that the product required an understanding of deep learning to author successfully. In many cases, the company ended up doing the design itself, despite offering workshops about the underlying principles. Similarly, the industrial-strength branching simulation tools I knew then struggled to survive.

And that was my reason, then, to suggest that the market wasn’t ready. I didn’t think enough corporate trainers, let alone the managers and funding decision-makers, would get the value proposition. There still are many ‘accidental’ instructional designers, and there were even more then. The question is whether such a tool could now succeed. And I’m more positive now.

I think we are seeing greater interest in learning science. The big societies have put it on their roadmaps, and our own little LDA learning science conference was well received. Similarly, we’re seeing more books on learning science (including my own), and more attention to same. I think more folks are looking for tools that make it easy to do the right thing. Yes, we’re also confronting the AI hype, but I think after the backlash we’ll start thinking again about good, not just cheap and fast. I not only hope, but think there’s evidence, that we are getting smarter and more focused on quality. Fingers crossed!

Looking forward

31 December 2024 by Clark

[Image: Woman on the ocean, peering into the distance.]

Last week, I expressed my gratitude for folks from this past year. That’s looking back, so it’s time to gaze a touch ahead. With some thoughts on the whole idea! So here’s looking forward to 2025. (Really? 25 years into this new century? Wow!)

First, I’m reminded of a talk I heard once. The speaker, who if memory serves had written a book about predicting the future, explained why it’s so hard. His point was that, yes, there are trends and trajectories, but there was always an unexpected twist. So you could expect X, but X would arrive with a wrinkle no one saw coming. For instance, I don’t think anyone a year ago really expected Generative AI to become such a ‘thing’.

There was also the time someone went back, looked at some predictions for the coming year, and evaluated them. That didn’t turn out so well, including for me! While I have opinions, they’re just that. They may be grounded in theory and 4+ decades of experience, but they’re still pretty much guesswork, for the reason above.

What I have done instead, for a number of years now, is something different: talk about what I think we should see. (Or, to put it another way, what I’d like to see. ;) Which hasn’t changed much, somewhat sadly. I do think we’ve seen a continuing rise of interest in learning science, but it’s been tempered by the emergence of ways to do things cheaper and faster. (A topic I riffed on for the LDA Blog.) When there’s pressure to do work faster, it’s hard to fight for good.

So, doing good design is a continued passion for me. However, in the conversations around the Learning Science conference we ran late this year, something else emerged that I think is worthy of attention. Many folks were looking for ways to actually do learning science. That is, resolving the practical challenges of implementing the principles. That, I think, is an interesting topic. Moreover, it’s an important one.

I have to be cautious. When I taught interface design, I deliberately pushed for more cognition than programming. My audience was software engineers, so I erred on the side of getting them thinking about thinking. Which, I think, is right. I gave practical assignments and feedback. (I’d do better now.) I think you have to push further than seems necessary, because folks will backslide, and you want them as far along as you can get them.

On the other hand, you can’t push folks beyond what they can do. You need to have practical answers to the challenges they’ll face in making the change. In the case of user experience, the pushback was internal. Here, I think it’s more external. Designers generally want to do good design. It’s the pragmatics of the situation that are the barrier.

If I want people to pay more attention to learning science, I have to find a way to make it doable in the real world. While I’m finding more nuances, which interests me, I have to think of others. Someone railed that there are too many industry pundits who complain about bad practices (mea culpa), instead of cheering folks on and showing them they can do better. I think we need both, but it’s also incumbent on us to talk about what to do, practically.

Fortunately, I have not only principle but also experience doing this in the real world. Also, we’ve talked to some folks along the way, and we’ll do more. We need to find that sweet spot (including ‘forgiveness is easier than permission’!) where folks can be doing good while doing well. So that’s my intention for the year. With, of course, the caveat above! That’s what I’m looking forward to. You?

The enemy of the good

10 December 2024 by Clark

We frequently hear that ‘perfection is the enemy of the good’. And that may well be true. However, I want to suggest that there’s another enemy that plagues us as learning experience designers. We may be trying to do good, but there are barriers. These are worthy of explicit discussion.

You also hear about the holy trinity of engineering: cheap, fast, or good; pick two. We have real world pressures that want us to do things efficiently. For instance, we have lots of claims that generative AI will allow us to generate more learning faster. Thus, we can do more with less. Which isn’t a bad thing…if what we produce is good enough. If we’re doing good, I’ll suggest, then we can worry about fast and cheap. But doing bad faster and cheaper isn’t a good thing! Which brings us to the second issue.

What is our definition of ‘good’? It appears that, too often, good is whether people ‘like’ it. Which isn’t a bad thing; it’s even the first level in the Kirkpatrick-Katzell model: asking what people think of the experience. One small problem: the correlation between what people think of an experience and its actual impact is .09 (Salas, et al, 2012). That’s zero with a rounding error! What it means is that people’s evaluation of the experience and its actual impact aren’t correlated at all. It could be highly rated and ineffective, or highly rated and effective. At core, you can’t tell from the rating.
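
To make ‘zero with a rounding error’ concrete, square the correlation: r² gives the proportion of variance in impact that ratings explain. A quick back-of-the-envelope check (plain arithmetic, sketched in Python):

    r = 0.09            # correlation between ratings and impact (Salas et al., 2012)
    r_squared = r ** 2  # coefficient of determination: variance in impact explained
    print(f"r^2 = {r_squared:.4f}")  # r^2 = 0.0081, i.e. less than 1%

That is, ratings account for under one percent of the variance in outcomes.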

What should be ‘good’? The general intent of a learning intervention (or any intervention, really) is to have an impact! If we’re providing learning, it should yield a new ability to ‘do’. There are a multitude of problems here. For one, we don’t evaluate performance, so how would we know if our intervention is having an impact? Have learners acquired new abilities that are persisting in the workplace and leading to the necessary organizational change? Who knows? For another, folks don’t have realistic expectations about what it takes to have an impact. We’ve devolved to a state where if we build it, it must be good. Which isn’t a sound basis for determining outcomes.

There is, of course, a perfectly good reason to evaluate people’s affective experience of the learning. If we’re designing experiences, having it be ‘hard fun’ means we’ve optimized the engagement. That’s fine, but only after we’ve established efficacy. If we’re not having a learning impact in terms of new abilities to perform, what people think about it isn’t of use.

Look, I’d prefer us to be in the situation where perfection is the enemy of the good! That’d mean we’re actually doing good. Yet, in our industry, too often we don’t have any idea whether we are or not. We’re not measuring ‘good’, so we’re not designing for it. If we measured impact first, then experience, we could get overly focused on perfection. That’d be a good problem to have, I reckon. Right now, however, we’re only focused on fast and cheap. We won’t get ‘good’ until we insist upon it from and for ourselves. So, let’s, shall we?

Convincing stakeholders

3 December 2024 by Clark

As could be expected (in retrospect ;), a recurrent theme in the discussions from our recent Learning Science Conference was how to deal with objections. For instance, from folks who believe myths, or who don’t understand learning. Of course, we don’t measure, amongst other things. However, we also have mistaken expectations about our endeavors. That’s worth addressing. So, here I’m talking about convincing stakeholders.

To be clear, I’m not talking about myths; I’ve already addressed those. But is there something to be taken away from that work? I suggested (and practiced, in my book on myths in our industry) that we need to treat people with respect. Specifically, we need to:

  • Acknowledge the appeal
  • Also address what could be the downsides
  • Then, look to the research
  • Finally, and importantly, provide an alternative

The open question is whether this also applies to talking about learning.

In general, when trying to convince folks why we need to shift our expectations about learning, I suggest being prepared with a suite of stories. I recognize that different approaches will work in different circumstances. So, I’ve suggested we should have to hand:

  • The theory
  • The data/research
  • A personal illustrative anecdote
  • One of their own anecdotes, solicited and put to use
  • A case study
  • A case study of what competitors are doing

Then, we use the one we think works best with this stakeholder in this situation.

Can we put these together? I think we can, and perhaps should. We can acknowledge the appeal of the current approach: e.g., it’s not costing too much, and we have faith it’s working. We should also reveal the potential flaws if we don’t remedy the situation: we’re not actually moving any particular needle. Then we can examine the situation: here we draw upon whichever approach from the second list fits best. Finally, we offer an alternative: if we do good learning design, we can actually influence the organization in positive ways!

This, I suggest, is how we might approach convincing stakeholders. And, let me strongly urge, we need to! Currently there are far too many who believe that learning is the outcome of an event. That is, if we send people off to a training event, they’ll come back with new skills. Yet, learning science (and data, when we bother) tells us this isn’t what happens. People may like it, but there’s no persistent change. Instead, learning requires a plan and a journey that develops learners over time. We know how to do good learning design, we just have to do it. Further, we have to have the resources and understanding to do so. We can work on the former, but we should work on the latter, too.

Across Contexts

26 November 2024 by Clark

(Have I talked about looking across contexts for learning before? I looked and couldn’t find it, though I’m pretty good about sharing diagrams?!? So, here it is; if it’s a repeat, please bear with me.)

In our recent learning science conference, one topic that came up was contexts. That is, I suggest the contexts we see across examples and practice define the space of transfer. We know that contextual performance is better than abstract (cf. Bransford’s work with the Cognition and Technology Group at Vanderbilt). The natural question is how to choose contexts. The answer, I suggest, is ad hoc: choose the minimal set of contexts that spans the space of transfer. What we’re talking about is a set, chosen across contexts, that supports the best learning.

[Diagram: a cloud of all possible applications, containing an oval of correct applications; inside, some clustered ‘o’ characters near each other and a character ‘A’ further away, with ‘x’ characters spaced more evenly around the oval so the A falls inside the spanned space.]

So, in talks I’ve used the diagram to say that if you choose the set of contexts represented by the ‘o’s, you’ll be unlikely to transfer to A, whereas if you choose the ‘x’s, you’re much more likely. Let me make that concrete: let’s talk negotiation (something we’re all likely to experience). If all your contexts are about vendors (the ‘o’s), you may not apply the principles to negotiating with a customer (the A). If, however, you have contexts negotiating with vendors, customers, maybe even employers (the ‘x’s), you’re more likely to transfer to other situations. (Though your employer might not like it! ;)

The question that was asked was how to choose the set. You can be algorithmic about it: if you could measure all dimensions of transfer, and ensure you’re progressing from simple to complex along those, you’d be doing the scientific best. It might lead you to choose too many contexts, however. It may be that you can choose a suite based upon a more heuristic approach to coverage. Here I mean picking ones that provide substantive coverage based upon expertise (say, from your SME or supervisors of performance). I suspect that, regardless, you’ll have to make your best first guess and then test to see whether you’re getting appropriate transfer.
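
If you did want to be algorithmic about it, the selection is essentially a set-cover problem. Below is a minimal sketch in Python, assuming (hypothetically) that you’ve tagged each candidate context with the transfer dimensions it exercises; the context names and tags are purely illustrative, not from any real tool. The greedy heuristic repeatedly picks the context covering the most still-uncovered dimensions:

    def pick_spanning_set(contexts: dict[str, set[str]], dimensions: set[str]) -> list[str]:
        """Greedily pick a small set of contexts covering all target dimensions."""
        chosen, uncovered = [], set(dimensions)
        while uncovered:
            # Take the context that covers the most still-uncovered dimensions.
            best = max(contexts, key=lambda c: len(contexts[c] & uncovered))
            if not contexts[best] & uncovered:
                break  # no context covers what remains; flag the gap to your SME
            chosen.append(best)
            uncovered -= contexts[best]
        return chosen

    # Hypothetical negotiation contexts, tagged with assumed transfer dimensions.
    contexts = {
        "vendor negotiation":   {"price", "relationship"},
        "customer negotiation": {"price", "scope", "relationship"},
        "salary negotiation":   {"scope", "personal stakes"},
    }
    print(pick_spanning_set(contexts, {"price", "scope", "relationship", "personal stakes"}))
    # -> ['customer negotiation', 'salary negotiation']

Greedy coverage isn’t guaranteed to be truly minimal, which mirrors the point above: treat the result as a best first guess, then test for actual transfer.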

It’s important to ensure that the set is minimal. You don’t want so many contexts that the experience becomes onerous. So pick a set that spans the space, but also is slim. The right set will illuminate the ways in which things can vary without being too large. Another criterion is to have interesting contexts. You are, I’ll suggest, free to exaggerate them a little to make them interesting if they’re not inherently so.

You may also need some contexts where the right move is not to apply what’s being learned. What I mean is that while it could seem appropriate to extend whatever’s being learned to this situation, you shouldn’t. Some ideas invite over-generalization, and you’ll need to help people learn where the limits are.

Note that the contexts are those across both examples and practice. So, learners will see some contexts in examples, then others in practice. It may be (if the skill is complex, or infrequent, or costly) that you need lots of practice, and then this isn’t a worry. Still, making sure you’re covering the right swath across contexts will support achieving the impact in all appropriate situations.

I’m less aware of research on the spread of contexts for transfer (PhD topic, anyone?), and welcome pointers. Still, cognitive theory suggests that this all makes sense. It does to me; how about you?

Beyond Learning Science?

19 November 2024 by Clark

The good news is, the Learning Science Conference has gone well. The content we (the Learning Development Accelerator, aka LDA) hosted from our stellar faculty was a win. We’ve had lively discussions in the forum. And the live sessions were great! The conference continues, as the content will remain available (including recordings of the live sessions). The open question is: what next? My short answer is going beyond learning science.

So, the conference was about what’s known in learning science. We had topics about the foundations, limitations, media, myths, informal/social, desirable difficulty, applications, and assessment/evaluation. What, however, comes next? Where do you go from a foundation in learning science?

My answer is to figure out what it means! There are lots of practices in L&D that are grounded in learning science but go beyond it to application. My initial list looks like this:

  1. Instructional design. Knowing the science is good, but how do you put it into a process?
  2. Modalities. When you’re doing formal learning, you can still do it face to face, virtually, online, or blended. What are the tradeoffs, and when does each make sense?
  3. Performance consulting. We know there are things where formal learning doesn’t make sense. We want to identify gaps and root causes to determine the right intervention.
  4. Performance support. If you determine job aids are the answer, how do you design, develop, and evaluate them? How do they interact with formal learning?
  5. Innovation. This could (and should; editorial soapbox) be an area for L&D to contribute. What’s involved?
  6. Diversity. While this is tied to innovation, it’s a worthy topic on its own. And I don’t just mean compliance.
  7. Technology. There are lots of technologies, what are their learning affordances? XR, AI, the list goes on.
  8. Ecosystem. How do you put the approaches together into a coherent solution for performance? If you don’t have an ‘all singing, all dancing’ solution, what’s the alternative?
  9. Strategy. There’s a pretty clear vision of where you want to be. Then, there’s where you are now. How do you get from here to there?

I’m not saying this is the curriculum for a follow-up; I’m saying these are my first thoughts. This is what I think follows beyond learning science. There are obviously other ways we could and should go. These are my ideas, and I don’t assume they’re right. What do you think should be the follow-on? (Hint: this is likely what next year’s conference will be about. ;)

What L&D resources do we use?

29 October 2024 by Clark

This isn’t a rhetorical question. I truly do want to hear your thoughts on the resources needed to successfully execute our L&D responsibilities. Note that by resources in this particular case, I’m not talking about courses (e.g., skill development), nor community. I’m specifically asking about the information resources, such as overviews, and in particular the tools, we use to do our job. So I’m asking: what L&D resources do we need?

[Diagram: spaces for strategy, analysis, design, development, implementation, and evaluation, as well as topics of interest; elements include tools, information resources, overviews, and diagrams, with some examples populating the spaces.]

I’m not going to ask this cold, of course. I’ve thought about it a bit myself, creating an initial framework (click on the image to see it larger). Ironically, considering my stance, it’s based around ADDIE. That’s because I believe the elements are right; it’s just not a good basis for a design process. However, I do think we may need different tools for the stages of analysis, design, development, implementation, and evaluation, even if we don’t invoke them in a waterfall process. I also have categories for overarching strategy, and for specific learning topics. These are the spaces in which resources can reside.

There are also several different types of resources I’ve created categories for. One is an overview of the particular spaces indicated above. Another is information resources that drill into a particular approach or more. These can be in any format: text or video, typically. Because I’m weird for diagrams, I have them as a separate category, but they’d likely be a type of info resource. Importantly, one category is tools. Here I’m thinking of the performance support tools we use: templates, checklists, decision trees, lookup tables. These are the things I’m a bit focused on.

Of course, this is for evidence-based practices. There are plenty of extant frameworks that are convenient, and cited, but not well-grounded. I am looking for the tools you trust and use to accomplish meaningful solutions to real problems. The ones that provide support for excellent execution. In addition to the things listed above, how about processes? Frameworks? Models? What enables you to be successful?

Obviously, but importantly, this isn’t done! That is, I put my first best thoughts out there, but I know that there’s much more. More will come to me (already has; I’ve revised the diagram a couple of times), but I’m hoping more will come from you too. That includes the types of resources and spaces, as well as particular instances.

The goal is to think about the resources we have and use. I welcome your input, via comments on the blog or wherever you see this post; let me know which ones you find essential to successful execution. I’d really like to know what L&D resources we use. Please take a minute or two and weigh in with your top and essential tools. Thanks!

Learning Science Conference 2024

15 October 2024 by Clark

I believe, quite strongly, that the most important foundation anyone in L&D can have is understanding how learning really works. If you’re going to intervene to improve people’s ability to perform, you ought to know how learning actually happens! Which is why we’ve created the Learning Science Conference 2024.

We have some of the most respected translators of learning science research into practice. Presenters are Ruth Clark, Paul Kirschner, Will Thalheimer, Patti Shank, Nidhi Sachdeva, as well as Matt Richter and myself. They’ll be providing a curated curriculum of sessions. These are admittedly some of our advisors to the Learning Development Accelerator, but that’s because they’ve reliably demonstrated the ability to do the research, and then to communicate the results of their own and others’ work in terms of the implications for practice. They know what’s right and real, and make that clear.

The conference is a hybrid model; we present the necessary concepts asynchronously, starting later this month. Then, from 11-15 November, we’ll have live online sessions led by the presenters. These are at two different times to accommodate as much of the globe as we can! In these live sessions we’ll discuss the implications and workshop issues raised by attendees. We will record the sessions in case you can’t make it. I’ll note, however, that participating is a chance to get your particular questions answered! Of course, we’ll have discussion forums too.

We’ve worked hard to make this the most valuable grounding you can get, as we’ve deliberately chosen the topics that we think everyone needs to comprehend. I suggest there’s something there for everyone, regardless of level. We’re covering the research and implications around the foundations of learning, practices for design and evaluation, issues of emotion and motivation, barriers and myths, even informal and social learning. It’s the content you need to do right by your stakeholders.

Our intent is that you’ll leave equipped to be the evidence-based L&D practitioner our industry needs. I hope you’ll take advantage of this opportunity, and hope to see you at the Learning Science Conference 2024.

Simple Models and Complex Problems

8 October 2024 by Clark

I’m a fan of models. Good models, ones that are causal or explanatory, can provide guidance for making the right decisions. However, there are some approaches that are, I suggest, less than helpful. What makes a good or bad model? My problem is knowing when each is appropriate: simple models and complex problems.

A colleague of ours sent me an issue of a newsletter (it included the phrase ‘make it meaningful’ ;). In it, the author was touting a four-letter, acronym-based model. And, to be fair, there was nothing wrong with what the model stipulated. Chunking, maintaining attention, elaboration, and emotion are all good things. What bothered me was that these elements weren’t sufficient! They covered important elements, but only some. If you just took this model’s advice, you’d have somewhat more memorable learning, but you’d fall short of the real potential impact. For instance, there wasn’t anything there about the importance of contextualized practice or feedback. Nor models, for that matter!

I’m not allergic to n-letter acronym models. For instance, I keep the coaster I was given for Michael Allen’s CCAF on my desk. (It’s a nice memento.) His Context-Challenge-Activity-Feedback model is pretty comprehensive for the elements that a practice activity has to have (not surprisingly). However, learning experiences need more than just practice; they need introductions, and models, and examples, and closings as well. And while the aforementioned elements are necessary, they’re not sufficient. Heck, Gagné talked about nine events.

What I realize as I reflect is that I like models that have the appropriate amount of complexity for the level of description they’re addressing. Yet I’ve seen far too many models that are cute (some actually spell words) and include some important ideas, but aren’t comprehensive for what they claim to cover. The problem, of course, is that you need to understand enough to be able to separate the wheat from the chaff. I’ll suggest looking to vetted models: ones supported by folks who know, with criticisms and accolades to accompany them. Read the criticisms, and see if they’re valid. If they’re not, the model may be useful.

Ok, one other thing bothered me. This model supposedly has support from neuroscience. However, as I’ve expressed before, neuroscience has yet to produce results for learning design that aren’t already available from cognitive science research. This, to me, is just marketing, with no real reason for inclusion except to make the model more trendy and appealing. A warning sign, to me at least.

Look, designing for learners is complex. Good models help us handle this complexity well. Bad ones, however, can mislead us into only paying attention to particular bits and create insufficient solutions. When you’re looking at simple models and complex problems, you need to keep an eye out for help, but maybe it needs to be a jaundiced eye.
