Learnlets

Clark Quinn’s Learnings about Learning

A very insightful framework

11 June 2019 by Clark 1 Comment

Jane Hart has just come up with something new and, to me, intriguing. Ok, so she’s a colleague from the Internet Time Alliance, and I’ve been a fan of her work for a while, but I think this is particularly good.  If you’ve read here before, you’ll know I love a good model (Harold Jarche’s Seek>Sense>Share comes to mind). So when I parsed her “from training to modern workplace learning”, it resonated in many ways.  So here’s her framework with some comments.

First, some context. If you’ve known my work at all, you know that I’ve been pushing an L&D revolution. And that’s about rethinking training as transformative experience design, including performance support, and also addressing informal learning. That’s intellectricity! And it’s sometimes hard to tie them together coherently.

Jane’s always had a talent for drilling down into the practicalities in sensible ways. Her books, continually updated, have great specifics about things to do. This is a framework that ties it together nicely.

The thing I like is the way she’s characterized different activities. The categories of Discovery (informal learning), Discourse (social learning), and Doing (experiential learning) provide a nice handle around which to talk about elements, roles, and tasks. And, importantly, prescriptions. And I really like the ‘meta’ layer, where she suggests skills for each vertical.

I’m not without quibbles, however small. For instance, with her use of microlearning, because of my concerns about the label rather than her specific intention. She told me personally that she means “short daily learning”, and I think that’s great. I just think of that as spaced learning ;). And I might relabel ‘discovery’ as ‘develop’, because it’s about the individual’s continual learning. And I’m not sure there’s what I call ‘slow’ innovation there, creating a culture and practices around experimentation and exposure to the ‘adjacent possible’. But it’s hard for one diagram to capture everything, and this does a great job.

I admit that I haven’t parsed all the nuances yet. But as an advocate of diagrams and frameworks, I think this is truly insightful and useful. (And she’s updated it, so I’ve grabbed this copy, which appears to have lost microlearning.) I’m sure she, as well as I, welcome your thoughts!

Labels for what we do

4 June 2019 by Clark 5 Comments

Of late there’s been a resurrection of a long-term problem. While it’s true for our field as a whole, it’s also true for the specific job of those who design formal learning. I opined about the problem of labels for what we do half a year ago, but it has raised its head again. And this time, some things have been said that I don’t fully agree with. So, it’s time to weigh in again.

So, first, Will Thalheimer wrote a post in which he claims to have the ultimate answer (in his usual understated way ;). He goes through the usual candidates of labels for what we do – instructional designer, learning designer, learner experience designer – and finds flaws.

And I agree with him on learning designer and instructional designer. We can’t actually design learning, we can only create environments where learning can happen. It’s a probabilistic game. So learning designer is out.

Instructional designer, then, would make sense, but…it’s got too much baggage.  If we had a vision of instruction that included the emotional elements – the affective and conative components – I could buy it. And purists will say they do (at least, ones influenced by Keller). But I will suggest that the typical vision is of a behavioristic approach. That is, with a rigorous focus on content and assessment, and less pragmatic approaches to spacing and flexibility.

He doesn’t like learning engineer for the same reason as learning designer: you can’t ‘engineer’ learning. I don’t quite agree. One problem is that right now there are two interpretations of learning engineer. My original take on that phrase was that it’s about applying learning science to real problems, just as a civil engineer applies physics…and I liked that. Though, yes, you can lead learners to learning, but you can’t make them think.

However, Herb Simon’s original take (now instantiated in the IEEE’s initiative on learning engineering) focused more on the integration of learning science with digital engineering. And I agree that’s important, but I’m not sure one person needs to be able to do it all. Is the person who engineers the underlying content engine the same one as the person who designs the experiences that are manifest out of that system? I think the larger picture increasingly relies on teams. So I’m taking that out of contention for now.

Will’s answer: learning architect. Now, in my less-than-definitive post last year, I equated learning experience designer and learning architect, roughly. However, Will disparages the former and heaps accolades on the latter. My concern is that architects design a solution, but then it gets not only built by others, but interior designed by others, and… It’s too ‘hands off’! And as I pointed out, I’ve called myself that recently, but in that role I may have been more an architect ;).

His argument against learning experience designer doesn’t sit well with me. Ignoring the aspersions cast against those to whom he attributes the label, his underlying argument is that just designing experiences isn’t enough. He admits we can’t ensure learning, but suggests that this is a weak response. And here’s where I disagree. I think the inclusion of experience does exactly what I want to focus on: the emotional trajectory and the motivational commitment. Not to the exclusion of the learning sciences, of course. AND, I’d suggest, also recognizing that the experience is not an event, but an extended set of activities. Specifically, it will be spread across technologies as needed.

The problem, as Jane Bozarth raised in a column, is more than just this, however. What research into the role shows is that there are just too many jobs being lumped under the label (whatever it is). Do you develop too? Do you administer the LMS? The list goes on.

I think we need to perhaps have multiple job titles. We can be an instructional designer, or a learning experience designer, or an instructional technologist. Or even a learning engineer (once that’s clear ;). But we need to keep focused, and as Jane advised, not get too silly (wizard?). It’s hard enough as it is to describe what we do without worrying about labels for it. I think I’ll stick with learning experience designer for now. (Not least because I’m running a workshop on learning experience design at DevLearn this fall. ;) That’s my take, what’s yours?

How (Not) To Write Marketing Posts

29 May 2019 by Clark 2 Comments

You’ve seen my takedowns of various posts by now, and the flurry of fluff continues. It seems like there’s some baseline social media marketing course that everyone takes. And the very first thing is a series of steps that yields annoyance and embarrassment (or should). For the sake of all of us who suffer from this, we need to stop! We need better posts for our industry. Even if they’re for promotion (I get it), we need more sensible marketing posts.

So, the steps seem to be:

  1. Write a post (more below)
  2. Do a search with a keyword from the post to find related posts
  3. Write to every blog author you find and ask them to link to your post

And, as one of the people who blogs (e.g. here),  please stop!  I have a canned response that includes the line:

I deliberately ignore what comes unsolicited, and instead am triggered by what comes through my network: Twitter, Facebook, LinkedIn, Skype, etc.

Now, one of the problems is that many posts I see seem to follow a similar algorithm:

  1. Search for articles on a hot buzzword
  2. Pull together some points from the articles you find
  3. Mash it up as a post

The articles appear to be written by someone who doesn’t really know the industry. How do you explain the fact that they seem to be idiosyncratic collections of buzzwords and elements? Maybe newbie social media marketing hires are writing them? I don’t know. What I do know is that they’re worthy of evisceration.

If you don’t know what you’re talking about, please don’t write. Get someone who does! There are a number of folks you could find to write for you who can do a decent job.  I write for a couple of organizations that are willing to invest in quality. And there are some folks in the industry who work and write for their orgs that know what they’re talking about. And, of course, the blog posts and articles from people with a good reputation are places to look. But just because someone’s written something doesn’t mean it’s good.

The same rules for debunking apply here: is this someone with a known reputation? Or is there some independent validation of what they say? Otherwise, you either dismiss it, or track it back and analyze it in reference to what’s known. And that, of course, means knowing it yourself.

I’ll continue to eviscerate the marketing posts that come my way, and try to point the way to better thoughts in the area. I invite you to do the same! And I’m open to ideas about how to cut down on the number of wrong (if not actively misleading, and certainly self-serving) posts. Your thoughts?

Facilitate is the new train

9 May 2019 by Clark Leave a Comment

Ok, so I’m being provocative with the title, since I’m not advocating the overthrow of training. The main idea is that a new area for L&D is facilitation. However, this concept also updates training. It’s part of what I was arguing when I suggested that the new term for L&D should be P&D, Performance & Development. So let’s start with that. We need to facilitate in several directions!

The driver behind the suggested nomenclature change is that the focus of L&D needs a shift. The revolutionary point of view is that organizations need both optimal execution and continual innovation (read: learning). In this increasingly chaotic time, the former is only the cost of entry, but it can’t be ignored. The latter is also becoming more and more critical!

A performance focus is the key to execution. You want to ensure people are doing what’s known to need doing. That’s the role of instruction and performance support. Performance consulting is the way to work backwards from the problem and determine the best interventions for that optimization.

However, learning science is pushing us to recognize that we can do better. Information dump and knowledge test isn’t going to lead to any change in behavior. If you want people to be able to do, you have to have them do in practice. Which means the focus is on the practice and the feedback. The latter is facilitation. The clichéd switch from sage on the stage to guide on the side does capture it. So even here we see the need for facilitation.

It’s in the latter, however, where facilitation really comes to the fore. When we talk about development, we’re going beyond developing the individual. We are addressing the organization’s learning. And, as I’ve said, innovation  is learning, just a different sort. What’s needed is  informal learning.

And informal learning, while natural, isn’t always optimal. Habits, misconceptions, culture, and more can intrude. This is why facilitation may be even more key to success for organizations.

And, again, L&D should be the most knowledgeable about learning, because learning underpins both performance and development. Thus, if L&D is going to adapt, learning how to facilitate learning will be core. Facilitate really will be the new ‘train’.

Competencies for L&D Processes?

1 May 2019 by Clark Leave a Comment

We have competencies for people. Whether it’s ATD, LPI, IBSTPI, IPL, ISPI, or any other acronym, they’ve got definitions for what people should be able to do. And it made me wonder, should there be competencies for processes as well? That is, should your survey validation process, or your design process, also meet some minimum standards?  How about design thinking? There are things you  do  get certified in, including such piffle as MBTI and NLP.  So does it make sense to have processes meet minimum standards?

One of the things I do is help orgs fine-tune their design processes. When I talk about deeper elearning, or we take a stand for serious elearning, there are nuances that make a difference. In these cases, I’m looking for the small things that will have the biggest impact. It’s not  about trying to get folks to totally revamp their processes (which is a path to failure).  Yet, could we go further?

I was wondering whether we should certify processes. Certainly, that happens in other industries. There are safety processes in maintenance, and cleanliness in food operations, and so on. Could and should we have them for learning? For performance consulting, instructional design, performance support design, etc?

Could we state what a process should have as a minimum requirement? Certain elements, at least, at certain way points? You could take Michael Allen’s SAM and use it as a model, for instance. Or Cathy Moore’s Action Mapping. Maybe Julie Dirksen’s Design For How People Learn could serve as one. The point being that we could stipulate some way points in design that would be the minimum to be counted as sufficient for learning to occur. Based upon learning science, of course. You know, deliberate and spaced practice, etc.

Then the question is, should we? Also, could we agree? Or, of course, people could market alternative process certifications. It appears this is what Quality Matters does, for instance, at least for K12 and higher ed. It appears IACET does this for continuing education certification. Would an organization certification matter? For customers, if you do customer training? For your courses, if you provide them as a product or service? Would anyone care that you meet a quality standard?

And it could go further. Performance support design, extended learning experience design (c.f. coaching), etc.  Is this something that’s better at the person level than the process level?

Should there be certification for compliance with a competency about the quality of the learning design process? Obviously in some areas. The question is, does it matter for regular L&D? On one hand, it might help mitigate the info dump/knowledge test courses that are the bane of our industry. On the other hand, it might be hard to find a workable definition that could suit the breadth of ways in which people meet learning needs.

All I know is that we have standards about a lot of things. Learning data interchange. Individual competencies. Processes in education. Can and should there be for L&D processes? I don’t know. Seriously. I’m just pondering. I welcome your thoughts.

Learning Tools and Uni Change

11 April 2019 by Clark Leave a Comment

As part of a push for Learning Engineering, Carnegie Mellon University recently released their learning design tools. I’ve been aware of CMU’s Open Learning Initiative for a suite of reasons, and their tools for separate reasons. And I think both are good. I don’t completely align with their approach, but that’s ok, and I regularly cite their lead as a person who’s provided sage advice about doing good learning design. Further, their push, based upon Herb Simon’s thoughts about improving uni education, is a good one. So what’s going on, and why?

First, let’s be fair, most uni learning design isn’t very good. It’s a lot of content dump, and a test. And, yes, I’m stereotyping.  But it’s not all that different from what we see too often in corporate elearning. Not enough practice, and too much content. And we know the reasons for this.

For one, experts largely don’t have conscious access to what they do, owing to the nature of our cognitive architecture. We compile information away, and research from the Cognitive Technology Group at the University of Southern California has estimated that 70% of what experts do isn’t available. They literally can’t tell you what they do! But they can tell you what they know. University professors are not only likely to reflect this relationship, they frequently may not actually be practitioners, so they don’t really do! That compounds the likely focus on ‘know’, not ‘do’.

And, of course, most faculty aren’t particularly rewarded for teaching. Even lower tiers on the Carnegie scale of research institutions dream and hire on the potential for research. There may be lip service to quality of teaching, but if you can publish and get grants, you’re highly unlikely to be let go without some sort of drastic misstep.

And the solution isn’t, I suggest, trying to get faculty to be expert pedagogues. I suggest that, except perhaps at the top-tier institutions, the teaching quality of an institution is perceived as a mark of its quality. And yet the efforts to make teaching important, supported, valued, etc., still tend to be idiosyncratic. Yes, many institutions are creating central bodies to support faculty in improving their classes, but those folks are relatively powerless to substantially change the pedagogy unless they happen to have an eager faculty member.

CMU’s tools align, largely, with doing the right thing, and this  is important. The more tools that make it easy to do the right thing, rich pedagogies, the better. It makes much more sense, for instance, to have a default be to have separate feedback for each wrong answer than the alternative. Not that we always see that…but that’s an education problem. We need faculty and support staff to ‘get’ what good learning design is.

Ultimately, this is a good push forward. Combined with greater emphasis on teaching quality, even a movement towards competencies, and rigor in assessment, there’s a hope to get meaningful outcomes from our higher education investment. What I’ve said about K12 also holds true for higher ed, it’s both a curriculum  and a pedagogy problem. But we can and should be pushing both forward. Here’s to steps in the right direction!

Better Benchmarking

20 March 2019 by Clark Leave a Comment

I was on a phone call and was asked whether I compare my clients against others in the business to help them figure out where they’re at, i.e. whether I offer my partners the chance to benchmark. After a bit of thought, I said that no, I didn’t, and explained why. Moreover, they found my answer intriguing, so I thought I’d share it with you.

So, as I’ve said before, I don’t like best practices. In fact, as I’ve written before, I think we shouldn’t benchmark ourselves against others. That’s a bad practice. Why? Because then we’re comparing ourselves against a relative measure. And I think we should be comparing ourselves to a principled metric about where we could and should be.

In fact, in the Revolution book, I created such a benchmark. Using my performance ecosystem model, I took the six fundamental elements and elaborated them. The first core element I documented is a learning culture. That’s accompanied by the approach to formal learning, looking at your instructional design and delivery. Then you move to performance focus, how you’re supporting performance in the world. We move on to social, how you’re facilitating informal learning and innovation. The next step is how you measure what you’re doing. Finally, there’s your infrastructure, how you’re creating the ecosystem environment. For each here I have a principle and an approach.

What I’ve done in the benchmarking instrument is take each of these and extend them. So I broke each element into two components, as there are nuances. And, for each, I proposed four levels of maturity:

  • Unaware: here you’re not thinking ecosystem
  • Initiating: now you’re beginning to establish an ecosystem approach
  • Mature: you’ve reached a working approach
  • Leading: at this level you’re setting the pace, thinking ahead

Thus, rather than benchmarking yourself against others, you have a principled approach with which to measure yourself. This instrument is fully elaborated in the Revolutionize L&D book, and goes into detail on each of the twelve rows.
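Purely as an illustration (a sketch of my own, not the actual instrument from the book), the shape of such an instrument, with six elements each split into two components and rated against four maturity levels, can be captured in a few lines:

```python
# Hypothetical sketch of the benchmarking instrument's structure.
# The element and level names come from the post; the two-component
# split is represented generically (the post doesn't name them), and
# the numeric scoring is my own invented convenience.

MATURITY_LEVELS = ["Unaware", "Initiating", "Mature", "Leading"]

ELEMENTS = ["Culture", "Formal Learning", "Performance Focus",
            "Social", "Measurement", "Infrastructure"]


def rows():
    """Yield the twelve (element, component) rows of the instrument."""
    for element in ELEMENTS:
        for component in (1, 2):
            yield (element, component)


def score(ratings):
    """Average maturity (0-3) across rated rows.

    `ratings` maps (element, component) -> a name from MATURITY_LEVELS.
    """
    values = [MATURITY_LEVELS.index(level) for level in ratings.values()]
    return sum(values) / len(values)


if __name__ == "__main__":
    # Rate everything "Initiating", with one row further along.
    ratings = {row: "Initiating" for row in rows()}
    ratings[("Culture", 1)] = "Mature"
    print(len(list(rows())))        # 12 rows
    print(round(score(ratings), 2))
```

The numeric averaging here is just for illustration; the actual instrument, per the post, is qualitative, with each of the twelve rows elaborated in the book.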

And that was my response to the query. As a person, on principle you’re not supposed to compare yourself to others, but to your own progress. How to set your benchmarks? Against formal criteria. The same is true for organizations. I’ve tried to make a scrutable framework, the Revolution Field Guide, so to speak. So, please, look to best principles, not practices, and evaluate yourself similarly.

 

Chasing Technology Good and Bad

19 March 2019 by Clark Leave a Comment

I’ve been complaining, as part of the myths tour, that everyone wants the magic bullet. But, as I was commenting to someone, there are huge tech opportunities we’re missing. How can I have it both ways?  Well, I’m talking about two different techs (or, rather, many).  The fact is, we’re chasing the wrong technologies.

The problem with the technologies we’re chasing is that we’re chasing them from the wrong beginning. I see people chasing microlearning, adaptive learning, video, sims, and more as the answer. And of course that’s wrong. There can’t be one all-singing all-dancing solution, because the nature of learning is remarkably diverse. Sometimes we need reminders, sometimes deep practice; sometimes individualization makes sense, and other times it’s not ideal.

The part that’s really wrong here is that they’re doing this  on top of bad design!  And, as I believe I’ve mentioned, gilded bad design is still bad design.  Moreover,  if people actually spent the time and money first on investing just in improving their learning design, they’d get a far better return on investment than chasing the latest shiny object.  AND, later investments in most anything would be better poised to actually be worthwhile.

That would seem to suggest that there’s no sensible tech to chase, after, of course, authoring tools for creating elearning. But that’s not true. Investment in, say, sims makes sense if you’re using them to implement good design (e.g. deep practice), as part of a good learning design strategy. But there’s something deeper I’m talking about. And I’ve talked about it before.

What I’m talking about are content systems. They may seem far down the pike, but let me (again) make the case about why they make sense now, and for the future. The thing is, being systematic about content has both short-term  and  long-term benefits. And you can use the short-term ones to justify the long-term ones (or vice-versa).

In the short term, thinking about content from a systems perspective offers you rigor. While that may seem off-putting, it’s actually a benefit.  If you design your content model around good learning design, you are moving towards the first step, above, about good design. And, if you write good descriptions within those elements, you  really provide a foundation that makes it difficult to do bad design.

My point is that we’re ignoring meaningful moves to chase chimera. There are real value steps to make, including formalizing design processes  and  tools about good design. And there are ways to throw your money away on the latest fad.  It’s your choice, but I hope I’ve made a case for one interpretation. So, what’s yours?

Curriculum or pedagogy?

12 March 2019 by Clark Leave a Comment

In a conversation today, I mentioned that previously I’ve thought that perhaps the best next ‘man on the moon’ project would be to put an entire K12 curriculum up online. And, I’ve also thought that the only way to really fix things is to train trainers of teachers to learn to facilitate learning around meaningful activity. And, of course, both are needed. What am I thinking?

So, there are huge gaps in the ways in which folks have access to learning. For example, I worked on a project that was trying to develop some K12 curricula online, to provide support for high school learners whose schools couldn’t sufficiently provide for them. The project had started with advanced learners, but recognized that wasn’t the only gap. And this is in California! So I have argued for a massive project, but using advanced curricula and pedagogy.

And, at the other end, as I spoke at a conference looking to talk about improving education in India. There, they have a much bigger need for good teachers than they can reach with their education schools. I was arguing for a viral teacher prep. The idea being not just to train teachers, but train the trainers of those teachers. Then the training could go viral, as just teaching teachers wouldn’t go fast enough.

And both are right, and not enough. In the conversation, I resurrected both points and am now reflecting on how they interact. The simple fact is that we need a better curriculum and a better pedagogy. As Roger Schank rightly points out, things like the quadratic equation are nuts to keep in a K12 curriculum. The fact is that our curricula came from before the Industrial Age and are barely adequate even there. Yet we’re in an Information Age. And our pedagogy is aligned to tests, not to learning or doing. We should be equipping kids with actionable knowledge to make meaningful decisions in their lives, not with arbitrary and abstract knowledge that isn’t likely to transfer.

And, of course, even if we did have such a curriculum online, we’d need teachers who could facilitate learning in this way. And that’s a barrier not just in India. The point being that most of the world is suffering with bad curricula and pedagogy. How do we make this change?

And I don’t have an answer. I think we should put both online, and support on the ground. We need that content, available through mobile to reach beyond the developed world, and we need the facilitators. They can be online, as I think about it, but they need to understand the context on the ground if they’re not there. They are context-specific necessities. And this is a massive problem.

Principle says: start small and scale. There are institutions doing at least parts of this, but scaling is a barrier. And again, I have no immediate solution other than a national (or international) initiative. We don’t want just one without the other. I don’t want teachers facilitating the old failed curricula, and I don’t want current pedagogies working on the new curricula. (And I shudder at the thought of a pre-college test in the old style trying to assess this new model!) I welcome your thoughts!

Thoughts on strategy from Training 19

6 March 2019 by Clark Leave a Comment

So last week I was the strategy track coach for the Training 19 conference. An experiment! That meant that I picked the sessions from a list of those who put their session proposals up for ‘strategy’, and could choose to open and/or close the track. I chose both. And there were thoughts on strategy from the sessions and the attendees that are worth sharing.

I chose the sessions mainly on two criteria: coverage of the topics, and sessions that sounded like they’d give real value.  I was lucky, the latter happened! While I didn’t get the complete coverage I wanted, I  did get a good spread of topics. So I think the track worked. As to the coaching, there wasn’t much of that, but I’ve sent in suggestions for whoever does it next year.

I knew two of the presenters, and some were new. My goal, again, was real coverage. And they lived up to it. Friend and colleague Michael Allen practiced what he preached while talking about good learning design, as he does. He was followed by Karen Polhemus & Stephanie Gosteli, who told a compelling tale of how they were managing a huge initiative by combining training with change management. Next was JD Dillon, another friend, who talked about his experiences building learning ecosystems that deemphasized courses, based upon data and his inferences. Alwyn Klein made an enthusiastic and compelling case for doing performance consulting before you start. Haley Harris & Beth Wisch went deep on data in talking about how they met content needs by curating. Joe Totherow talked about games as a powerful learning tool. Finally, Alex Kinnebrew pushed for finding stakeholder voices as a complement to data in making strategy.

I bookended these talks. I opened by making the case for doing optimal execution right, meaning doing proper learning design and performance support. Then I talked about driving for continual innovation with social and informal learning. I closed by laying out the performance ecosystem diagram (ok, so I replaced ‘elearning’ in the diagram with ‘training’, and that’s probably a change I’ll keep), and placed the coming talks on it, so that attendees would know where the talks fit. I mostly got it right ;). However, the feedback suggested that, for those who complained, it’s because I took too long to get to the overview. Useful feedback.

I finished with a 3-hour strategy session where I walked people through each element of the ecosystem (as I cut it), giving them examples, providing self-assessment, and items to add to their strategy for that element. I closed by suggesting that it was up to them to sequence, based upon their particular context. Apparently, people really liked this opportunity. One challenge was the short amount of time; this is usually run as a full-day workshop.

It’s clear that folks are moving to thinking ‘outside of the box’, and I’m thrilled. There were good audiences for the talks in a conference focused on doing training! It’s definitely time for thoughts on strategy. Perhaps, as has happened before, I was ahead of the time for the revolution. Here’s to a growing trend!
