Learnlets


Clark Quinn’s Learnings about Learning

The Learning Styles Zombie

23 June 2015 by Clark 5 Comments

It’s June, and June is Learning Styles month for the Debunker’s Club.  Now, I’ve gone off on Learning Styles before (here, here, here, and here), but  it’s been a while, and they refuse to die. They’re like zombies, coming to eat your brain!

Let’s be clear, it’s patently obvious learners differ. They differ in how they work, what they pay attention to, how they like to interact, and more. Surely, it makes sense to adapt the learning to their style, so that we’re optimizing their outcome, right?

Er, no. There is no consistent evidence that adapting to learning styles works. Hal Pashler and colleagues, in a study commissioned by Psychological Science in the Public Interest (read: a non-partisan, unbiased, truly independent work), found (PDF) that there was no evidence that adapting to learning styles worked. They did a meta-analysis of the research out there, and concluded this with statistical rigor. That is, some studies showed positive effects, and some showed negative, but across the body of studies suitably rigorous to be worth evaluating, there was no evidence that trying to adapt learning to learner characteristics had a definitive impact.

At least part of the problem is that the instruments people use to characterize learning styles are flawed. Surely, if learners differ, we can identify how? Not with psychometric validity (that means tests that stand up to statistical analysis). A commissioned study in the UK (like the one above: independent, etc.) led by Coffield evaluated a representative sample of instruments (including the ubiquitous MBTI, Kolb, and more), and found (PDF) only one that met all four standards of psychometric validity. And that one was a simple instrument measuring a single dimension.

So, what’s a learning designer to do? Several things: first, design for what is being learned. Use the best learning design to accomplish the goal. Then, if the learner has trouble with that approach, provide help. Second, do use a variety of ways of supporting comprehension. The variety is good, even if the evidence for choosing it based upon learning style isn’t. (So, for example, 4MAT isn’t bad; it’s just not based upon sound science, and why you’d want to pay to use a heuristic approach when you can do that for free is beyond me.)

Learners do differ, and we want them to succeed. The best way to do that is good learning experience design. We do have evidence that problem-based and emotionally aware learning design helps.  We know we need to start with meaningful objectives, create deep practice, ground in good models, and support with rich examples, while addressing motivation, confidence, and anxiety.  And using different media maintains attention and increases the likelihood of comprehension.  Do good learning design, and please don’t feed the zombie.

[Image: Do not feed the Learning Styles zombie]

Why Work Out Loud? (for #wolweek)

18 June 2015 by Clark 1 Comment

Why should one work out loud (aka Show Your Work)?  Certainly, there are risks involved.  You could be wrong.  You could have to share a mistake. Others might steal your ideas.  So why would anyone want to be Working Out Loud?  Because the risks are trumped by the benefits.

Working out loud is all about being transparent about what you’re doing. The benefits are multiple. First, others know what you’re doing, and can help. They can provide pointers to useful information, they can share tips about what worked (and didn’t) for them, and they’re better prepared for what will be forthcoming.

Those risks? If you’re wrong, you can find out before it’s too late.  If you share a mistake, others don’t have to make the same one.  If you put your ideas out there, they’re on record if someone tries to steal them.  And  if someone else uses your good work, it’s to the general benefit.

Now, there are times when this can be bad. If you’re in a Miranda organization, where anything you say can be held against you, it may not be safe to share. If your employer will take what you know and then let you go (without realizing, of course, that there’s more there), it’s not safe. Not all organizations are ready for sharing your work.

Organizations, however,  should be interested in creating an environment where working out loud is safe.  When folks share their work, the organization benefits.  People know what others are working on. They can help one another.  The organization learns faster.  Make it safe to share mistakes, not for the sake of the mistake, but for the lesson learned; so no one else has to make the same mistake!

It’s not quite enough to just show your work, however; you really want to ‘narrate’ your work. So working out loud is not just about what you’re doing, but also explaining why. Letting others see why you’re doing what you’re doing helps them either improve your thinking or learn from it. So not only does your work output improve, but your continuing ability to work gets better too!

You can blog your thoughts, microblog what you’re looking at, or make your interim representations available as collaborative documents; there are many ways to make your work transparent. This blog, Learnlets, exists for just that purpose of thinking out loud: so I can get feedback and input, and others can benefit. Yeah, there are risks (I have seen my blog purloined without attribution), but the benefits outweigh the risks. That’s as an independent, but imagine if an organization made it safe to share; the whole organization learns faster. And that’s the key to the continual innovation that will be the only sustainable differentiator.

Organizations that work together effectively are organizations that will thrive.  So there are personal benefits and organizational benefits.  And I personally think this is a role for L&D (this is part of the goal of the Revolution). So, work out loud about your efforts to work out loud!

#itashare

Embrace Plan B

17 June 2015 by Clark Leave a Comment

The past two weeks, I’ve been on the road (hence the paucity of posts).  And they’ve been great opportunities to engage around interesting topics, but also have provided some learning opportunities (ahem).  The title of this post, by the way, came from m’lady, who was quoting what a senior Girl Scout said was the biggest lesson she learned from her leader, “to embrace Plan B” ;).

So two weeks ago I was visiting a client working on upping their learning game. This is a challenge in a production environment, but as I discussed many times in posts over the second half of 2014 and some this year, I think there are some serious actions that can be taken.  What is needed are better ways to work with SMEs, better constraints around what makes useful content, and perhaps most importantly what makes meaningful interaction and practice.  I firmly believe that  there are practical ways to get serious elearning going without radical change, though some initial hiccups  will be experienced.

This past week I spoke twice. First, I spoke on a broad spectrum of learning directions to a group that was doing distance learning and wanted to take a step back, review what they’d been doing, and look for improvement opportunities. I covered deeper learning, social learning, meta-learning, and more. Then I went beyond and talked about 70:20:10, measurement, games and simulations, mlearning, the performance ecosystem, and more. I then moved on to a separate (and delightful) event in Vancouver to promote the Revolution.

It was the transition between the two events last week that threw me. Plan A was to fly back home on Tuesday, and then fly on to Vancouver on Wednesday morning. But, well, life happened. My flights both to and from the first engagement were delayed (thanks, American), and both first legs badly enough that I missed the connection. On the way out I just got in later than I expected (leading to 4.5 hours of sleep before the long and detailed presentation). But on the way back, I missed the last connecting flight home. And this had several consequences.

So, instead of spending Tuesday night in my own bed, and repacking for the next day, I spent the night in the Dallas/Fort Worth airport.  Since they blamed it on weather (tho’ if the incoming flight had been on time, it might’ve gotten out in time to avoid the storm), they didn’t have any obligation to provide accommodation, but there were cots and blankets available. I tried to pull into a dark and quiet place, but most of the good ones were taken already. I found a boarding gate that was out of the way, but it was bright and loud.  I gave up after an hour or so and headed off to another area, where I found a lounge where I could pull together a couple of armchairs and managed to doze for 2.5 or so hours, before getting up and on the hunt for some breakfast.  Lesson: if something’s not working, change!

I caught a flight back home in just enough time to catch the next one up to Vancouver. The problem was, I wasn’t able to swap out my clothes, so I was desperately in need of some laundry.  Upon arriving, I threw one of the shirts, socks, etc into a sink and gave them a wash and hung them up. (I also took a shower, which was not only a necessity after a rough night but a great way to gather myself and feel a bit more human).  The next morning, as I went to put on the shirt, I found a stain!  I couldn’t get up in front of all those people with a stained shirt.  Plan B was out the door. Also, the other shirt had acquired one too!  Plan C on the dust heap. Now what?  Fortunately, my presentation was in the afternoon, but I needed to do something.

So I went downstairs and found a souvenir shop in the hotel, but the shirts were all a wee bit too loud.  I didn’t really want to pander to the crowd quite so egregiously. I asked at the hotel desk if there was a place I could buy a shirt within walking distance, and indeed there was.  I was well and truly on Plan D by this time.  So I hiked on out to a store and fortunately found another shirt I could throw on.  Lesson: keep changing!

I actually made the story part of my presentation. I made the point that, just as in my case, organizations need not only optimal execution of their plans, but also the ability to innovate when the plan isn’t working. And L&D can (and should) play a role in this. So, help your people be prepared to create and embrace Plan B (and C and… however many adaptations they need to have).

And one other lesson for me: be better prepared for tight connections to go awry!

Content/Practice Ratio?

9 June 2015 by Clark 7 Comments

I end up seeing a lot of different elearning. And, I have to say, despite my frequent disparagement, it’s usually well written; the problem seems to be in the starting objectives. But compared to learning that really has an impact (medical, flight, or military training, for instance), it seems woefully under-practiced.

So, I’d roughly (and generously) estimate that the ratio is around 80:20 for content to practice. And, in the context of moving from ‘getting it right’ to ‘not getting it wrong’, that seems woefully inadequate. So, two questions: do we just need more practice, or do we also have too much content? I’ll put my money on both.

To start, in most of the elearning I see (even stuff I’ve had a role in, for reasons out of my control), the practice isn’t enough. Of course, it’s largely wrong, being focused on reciting knowledge as opposed to making decisions, but there also just isn’t enough. That’s ok if you know they’ll be applying it right away, but that usually isn’t the case. We really don’t scaffold the learner from their initial capability, through more and more complex scenarios, until they’re at the level of ability we want: performing the decisions they need to be making in the workplace with enough flexibility and confidence, and with sufficient retention until it’s actually needed. Of course, it shouldn’t be the event model, and that practice should be spaced over time. Yes, designing practice is harder than just delivering content, but it’s not that much harder to develop more than just to develop some.

However, I’ll argue we’re also delivering too much content. I’ve suggested in the past that I can rewrite most content to be 40–60% shorter than it starts (including my own; it takes me two passes). Learners appreciate it. We want a concise model and some streamlined examples, but then we should get them practicing. And then let the practice drive them to the content. You don’t have to prepackage it as much, either; you can give them some source materials that they’ll be motivated to use, and even some guidance (read: job aids) on how to perform.

And, yes, this is a tradeoff: how do we find a balance that both yields the outcomes we need but doesn’t blow out the budget?  It’s an issue, but I suggest that, once you get in the habit, it’s not that much more costly.  And it’s much more justifiable,  when you get to the point of actually measuring your impact.  Which many orgs aren’t doing yet.  And, of course, we should.

The point is that I think our ratio should really be 50:50, if not 20:80, for content to practice. That’s if it matters; and if it doesn’t, why are you bothering? And if it does, shouldn’t it be done right? What ratios do you see? And what ratios do you think make sense?

Disrupting Education

3 June 2015 by Clark 2 Comments

The following was prompted by a discussion on how education has the potential to be disrupted.  And I don’t disagree, but I don’t see the disruptive forces marshaling that I think it will take.  Some thoughts I lobbed in another forum (lightly edited):

Mark Warschauer, in his great book Learning in the Cloud (which has nothing to do with ‘the cloud’), pointed out that there are only 3 things wrong with public education: the curricula, the pedagogy, and the way they use tech; other than that they’re fine. Ahem. And much of what I’ve read about disruption seems flawed in substantial ways.

I’ve seen the for-profit institutions, and they’re flawed because even if they did understand learning (and they don’t seem to), they’re handicapped: they have to dance to the ridiculous requirements of accrediting bodies. Those bodies don’t understand why SMEs aren’t a good source of objectives, so the learning goals are not useful to the workplace. It’s not the profit requirement per se, because you could do good learning, but you have to start with good objectives, and then understand the nuances that make learning effective. WGU is at least being somewhat disruptive on the objectives.

MOOCs don’t yet have a clear business model; right now they’re subsidized by either public institutions or business experiments. And the pedagogy doesn’t really scale well: their objectives also tend to be knowledge-based, and to have a meaningful outcome they’d need to be application-based, and you can’t really evaluate that at scale (unless you get *really* nuanced about peer review, but even then you need some scrutiny that just doesn’t scale). For example, just because you learn to do AI programming doesn’t mean you’re ready to be an AI programmer. That’s the xMOOCs; the cMOOCs have their own problems with expectations around self-learning skills. Lovely dream, but it’s not the world I live in, at least yet.

As for things like the Khan Academy, well, it’s a nice learning adjunct, and they’re moving to a more complete learning experience, but they’re still largely tied to the existing curricula (e.g. doing what Jonassen railed against: the problems we give kids in schools bear no relation to the problems they’ll face in the real world).

The totally missed opportunity is the possibility of layering 21C skills across all of this in a systematic and developable way. If we could get better curricula, focused on developing applicable skills and meta-skills, with a powerful pedagogy, in a pragmatically deliverable way…

Lots of room for disruption, but it’s really a bigger effort than I’ve yet seen someone willing to take. And yet, if you did it right, you’d have an essentially unassailable barrier to entry: real learning done at scale. However, I’m inclined to think it’s more plausible in the countries that increasingly ‘get’ that higher ed is an investment in the future of a country, are making it free, and could make it a ‘man on the moon’ program. I’m willing, even eager, to be wrong on this, so please let me know what you think!

Model responses

2 June 2015 by Clark Leave a Comment

I was thinking about how to make meaningful practice, and I had a thought that was tied to some previous work that I may not have shared here.  So allow me to do that now.

Ideally, our practice has us performing in ways that are like the ways we perform in the real world.  While it is possible to make alternatives available that represent different decisions, sometimes there are nuances that require us to respond in richer ways. I’m talking about things like writing up an RFP, or a response letter, or creating a presentation, or responding to a live query. And while these are desirable things, they’re hard to evaluate.

The problem is that our technology for evaluating freeform text is limited, let alone anything more complex. While there are tools like latent semantic analysis that can be developed to read text, they’re complex to develop and won’t work on spoken responses, let alone spreadsheets or slide decks (common forms of business communication). Ideally, people would evaluate them, but that’s not a very scalable solution if you’re talking about mentors, and even peer review can be challenging for asynchronous learning.
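
(As a rough illustration of the automated route mentioned above: the sketch below scores a learner’s free-text answer against a model response using TF-IDF cosine similarity, a much cruder cousin of latent semantic analysis. The sample texts and the threshold are invented for illustration, and it assumes scikit-learn is available; nothing here comes from an actual course.)

# Crude sketch of automated free-text comparison, in the spirit of (but far
# simpler than) the latent semantic analysis mentioned above.
# Assumes scikit-learn is installed; texts and threshold are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

model_response = (
    "State your recommendation up front, then support it with two or three "
    "concrete examples drawn from the customer's own situation."
)
learner_response = (
    "I gave my recommendation first and backed it up with examples from "
    "the client's recent orders."
)

# Vectorize both texts and compare them in TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([model_response, learner_response])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]

print(f"Similarity to model response: {score:.2f}")
if score < 0.3:  # arbitrary threshold, purely illustrative
    print("Low overlap: flag for self-review against the rubric.")

Even this toy version shows the limits the paragraph points to: a similarity score says nothing about a spoken answer, a slide deck, or a spreadsheet, which is why the alternative below shifts the evaluation to the learner.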

An alternative is to have the learner evaluate themselves. We did this in a course on speaking, where learners ultimately dialed into an answering machine, listened to a question, and then spoke their responses. They could then listen to a model response as well as their own. Further, we could provide a guide, an evaluation rubric, to support the learner in evaluating their response with respect to the model response (e.g. “Did you remember to include a statement and examples?”).
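
(To make that mechanic concrete, here’s a minimal sketch of the rubric-plus-model-response pattern in code. The criteria are invented examples, not the actual rubric from the speaking course; it’s just one way the self-evaluation step could be structured.)

# Minimal sketch of a self-evaluation rubric: after hearing the model
# response, the learner walks through a checklist about their own answer.
# The criteria below are invented examples, not the actual course rubric.
from dataclasses import dataclass

@dataclass
class RubricItem:
    prompt: str        # the question the learner asks of their own response
    met: bool = False  # filled in by the learner's self-assessment

rubric = [
    RubricItem("Did you open with a clear statement of your position?"),
    RubricItem("Did you include at least two supporting examples?"),
    RubricItem("Did you close by inviting a response or next step?"),
]

def self_evaluate(rubric):
    """Walk the learner through each criterion after they hear the model response."""
    for item in rubric:
        answer = input(f"{item.prompt} (y/n) ")
        item.met = answer.strip().lower().startswith("y")
    met = sum(item.met for item in rubric)
    print(f"You met {met} of {len(rubric)} criteria; "
          "now listen to the model response again and note the differences.")

# self_evaluate(rubric)  # interactive; uncomment to try it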

This would work with more complex items, too.  “Here’s a model spreadsheet (or slide deck, or document); how does it compare to yours?”  This is very similar to the types of social processing you’d get in a group, where you see how someone else responded to the assignment, and then evaluate.

This isn’t something you’d likely do straight off; you’d probably scaffold the learning with simple tasks first.  For instance, in the example I’m talking about we first had them recognize well- and poorly-structured responses, then create them from components, and finally create them in text before having them call into the answering machine. Even then, they first responded to questions they knew they were going to get before tasks where they didn’t know the questions.  But this approach serves as an enriching practice on the way to live performance.

There is another benefit besides allowing the learner to practice in richer ways and still get feedback. In the process of evaluating the model response and using an evaluation rubric, the learner internalizes the criteria and the process of evaluation, becoming a self-evaluator and consequently a self-improving learner. That is, they use a rubric to evaluate their response and the model response. Going forward, that rubric can continue to guide them as they move out into a performance situation.

There are times where this may be problematic, but increasingly we can and should mix media and use technology to help us close the gap between the learning practice and the performance context. We can prompt, record learner answers, and then play back theirs and the model response with an evaluation guide.  Or we can give them a document template and criteria, take their response, and ask them to evaluate theirs and another, again with a rubric.  This is richer practice and helps shift the learning burden to the learner, helping them  become self-learners.   I reckon it’s a good thing. I’ll suggest that you  consider this as another tool in your repertoire of ways to create meaningful practice. What do you think?

Attention to connections

27 May 2015 by Clark 1 Comment

A colleague was describing his journey, and attributed much of his success (rightly) to his core skills, including his creativity. I was resonating with his list until I got to ‘attention to detail’, and it got me thinking.

Attention to detail is good, right?  We want people to sweat the nuances, and I certainly am inspired by folks who do that. But there are times when I don’t want to be responsible for the details. To be sure, these are times when it doesn’t make sense to have me do the details. For example, once I’ve helped a client work out a strategy, the implementation really largely should be on them, and I might take some spot reviews (far better than just helping them start and abandoning them).

So  I wondered about what the alternative would be. Now the obvious thought is lack of attention to detail, which might initially be negative, but could there be a positive connotation?   What came to me was attention to  connections. That is, seeing how what’s being considered might map to a particular conceptual model, or a related field. Seeing how it’s contextualized, and bringing together solutions.    Seeing the forest, not the trees.

I’m inclined to think that there are benefits to those who see connections, just as there is a need for those who can plug away at the details. And it’s probably contextual; some folks will be one in one area and another in another. For example, there are times I’m too detail-oriented (e.g. fighting for conceptual clarity), and times when I’m missing connections (particularly in reading the politics of a situation). And vice versa: times when I’m not detail-oriented enough, and very good at seeing connections.

They’re probably not ends of a spectrum, either, as I’ve gone away from that in practical matters (hmm, wonder what that implies about the Big 5?). Take introvert and extrovert: from a learning perspective it’s about how well you learn on your own versus how well you learn with others, and you could be good or bad at each or both. Similarly here, you could be able to do both (as with my colleague: he’s one of the smartest folks I know, demonstrably innovative and connecting, as well as being able to sweat the details whether writing code or composing music).

Or maybe this is all a post-hoc justification for wanting to play out at the conceptual frontier, but I’m not going to apologize for that.  It seems to work…

Evolutionary versus revolutionary prototyping

26 May 2015 by Clark 2 Comments

At a recent meeting, one of my colleagues mentioned that increasingly people weren’t throwing away prototypes.  Which prompted reflection, since I have been a staunch advocate for revolutionary prototyping (and here I’m  not talking about “the”  Revolution ;).

When I used to teach user-centered design, the tools for creating interfaces were complex. The mantras were test early, test often, and I advocated the Double Double P’s (Postpone Programming, Prefer Paper; an idea I first grabbed from Rob Phillips, then at Curtin). The reason was that if you started building too early in the design phase, you’d have too much invested to throw things away if they weren’t working.

These days, with agile programming, we see sprints producing working code, which then gets elaborated in subsequent sprints.  And the tools make it fairly easy to work at a high level, so it doesn’t take too much effort to produce something. So maybe we can make things that we can throw out if they’re wrong.

Ok, confession time: I have to say that I don’t quite see how this maps to elearning. We have sprints, but how do you have a workable learning experience and then elaborate it? On the other hand, I know Michael Allen’s doing it with SAM and Megan Torrance just had an article on it, but I’m not clear whether they’re talking storyboard, and then coded prototype, or…

Now that I think about it, I think it’d be good to document the core practice mechanic, and perhaps the core animation, and maybe the spread of examples.  I’m big on interim representations, and perhaps we’re talking the same thing. And if not, well, please educate me!

I guess the point is that I’m still keen on being willing to change course if we’ve somehow gotten it wrong. Small representations are good, and increasing fidelity is fine, so I suppose it’s okay if we don’t throw out prototypes often, as long as we do when we need to. Am I making sense, or what am I missing?

Symbiosis

20 May 2015 by Clark Leave a Comment

One of the themes I’ve been strumming in presentations is one where we complement what we do well with tools that do well the things we don’t. A colleague reminded me that JCR Licklider wrote of this decades ago (and I’ve similarly followed the premise from the writings of Vannevar Bush, Doug Engelbart, and Don Norman, among others).

We’re already seeing this. Chess has changed from people playing people, thru people playing computers and computers playing computers, to computer-human pairs playing other computer-human pairs. The best competitors aren’t the best chess players or the best programs, but the best pairs: that is, the player and computer that best know how to work together.

The implications are to stop trying to put everything in the head, and start designing systems that complement us in ways that assure the combination is the optimal solution to the problem being confronted. Working backwards, we should decide what portion should be handled by the computer and what by the person (or team), then design the resources, and then train the humans to use those resources in context to achieve the goals.

Of course, this is only in the case of known problems, the ‘optimal execution’ phase of organizational learning. We similarly want to have the right complements to support the ‘continual innovation’ phase as well. What that means is that we have to be providing tools for people to communicate, collaborate, create representations, access and analyze data, and more. We need to support ways for people to draw upon and contribute to their communities of practice from their work teams. We need to facilitate the formation of work teams, and make sure that this process of interaction is provided with just the right amount of friction.

Just like a tire, interaction requires friction. Too little and you go skidding out of control. Too much, and you impede progress. People need to interact constructively to get the best outcomes. Much is known about productive interaction, though little enough seems to make its way into practice.

Our design approaches need to cover the complete ecosystem, everything from courses and resources to tools and playgrounds. And it starts by looking at distributed cognition, recognizing that thinking isn’t done just in the head, but in the world, across people and tools. Let’s get out and start playing instead of staying in old trenches.

Ch-ch-ch-changes

19 May 2015 by Clark 4 Comments

Is there an appetite for change in L&D? That was the conversation I’ve had with colleagues lately. And I have to say that the answer is mixed, at best.

The consensus is that most of L&D is comfortably numb. That L&D folks are barely coping with getting courses out on a rapid schedule and running training events because that’s what’s expected and known. There really isn’t any burning desire for change, or willingness to move even if there is.

This is a problem. As one commented: “When I work with others (managers etc) they realise they don’t actually need L&D any more”. And that’s increasingly true: with tools to do narrated slides, screencasts, and videos in the hands of everyone, there’s little need to have the same old ordinary courses coming from L&D. People can create or access portals to share created and curated resources, and social networks to interact with one another. L&D will become just a part of HR, addressing the requirements (onboarding and compliance); everything else will be self-serve.

The sad part of this is the promise of what L&D could be doing. If L&D started facilitating learning, not controlling it, things could go better. If L&D realized it was about supporting the broad spectrum of learning, including self-learning, and social learning, and research and problem-solving and trouble-shooting and design and all the other situations where you don’t know the answer when you start, the possibilities are huge. L&D could be responsible for optimizing execution of the things they know people need to do, but with a broader perspective that includes putting knowledge into the world when possible. And L&D could also be optimizing the ability of the organization to continually innovate.

It is this possibility that keeps me going. There’s the brilliant world where the people who understand learning combine with the people who know technology and work together to enable organizations to flourish. That’s the world I want to live in, and as Alan Kay famously said: “the best way to predict the future is to invent it.” Can we, please?
