Learnlets

Clark Quinn’s Learnings about Learning

Making it visible and viral

22 February 2012 by Clark 2 Comments

On a recent client engagement, the issue was spreading an important initiative through the organization.  The challenges were numerous: getting consistent uptake across management and leadership, aligning across organizational units, and making the initiative seem important yet also concretely doable.  We saw pockets of success, and those are of interest.

For one, the particular unit had focused on making the initiative viral, and consequently had selected and trained appropriate representatives dispersed through their organization. These individuals were supported and empowered to incite change wherever appropriate, and they were seeing initial signs of success. The lesson here is that top-down is not always sufficient, and that benevolent infiltration is a valuable addition.

The other intervention was also social, in that the approach was to make the outcomes of the initiative visible. In addition to mantras, graphs depicting current status were placed in prominent places.  Further, suggestions for improvement were not only solicited, but made visible and their status tracked.  Again, indicators were positive on these moves.

The point is that change is hard, and a variety of mechanisms may be appropriate.  You need to understand not just what formal mechanisms you have, but also how people actually work.  I think that too often, planning fails to anticipate the effects of inertia, ambivalence, and apathy.  We need more emotional emphasis, more direct connection to individual outcomes, and more digestion into manageable chunks. This is true for elearning, learning, and change.

In looking at attitude change, and from experience, I recognize that even if folks are committed to change, it can be easy to fall back into old habits without ongoing support.  Confusion in message, lack of emotional appeal, and idiosyncratic leadership only reduce the likelihood of success.  If it's important, get alignment and sweat the details. If it's not, why bother?

Social media budget line item?

13 February 2012 by Clark 3 Comments

Where does social media fit in the organization?  In talking with a social media entrepreneur over beers the other day, he mentioned that one of his barriers in dealing with organizations was that they didn’t have a budget line for social media software.

That may sound trivial, but it’s actually a real issue in terms of freeing up the organization. In one instance, it had been the R&D organization that undertook the cost.  In another case, the cost was attributed to the overhead incurred in dealing with a merger.  These are expedient, but wrong.

It’s increasingly obvious that it’s more than just a ‘nice to have’.  As I’ve mentioned previously, innovation is the only true differentiator.  If that’s the case, then social media is critical. Why?  Because the myth of individual innovation is busted, as clearly told by folks like Keith Sawyer and Steven Berlin Johnson.  So, if it’s not individual, it’s social, and that means we need to facilitate conversations.

If we want people to be able to work together to create new innovations, we don’t want to leave it to chance.  In addition to useful architectural efforts that facilitate in person interactions, we want to put in place the mechanisms to interact without barriers of time or distance.  Which means, we need a social media system.

It’s pretty clear that if you align things appropriately (culture, vision, tools), you get better outcomes.  And, of course, culture isn’t a line item, and vision’s a leadership mandate.  But tools, well, they are a product/service, and need resources.

Which brings us to the initial point: where does this responsibility lie?  My preference is for the folks most likely to understand facilitating learning (though that’s sadly unlikely in too many L&D departments), but it could be IT, operations, or, as mentioned above, R&D.  The point is, this is arguably one of the most important investments in the organization, and typically not one of the most expensive (making it the best deal going!). Yet there’s no unified, obvious home!

There are worries if it’s IT. They are, or should be, great at maintaining network uptime, but don’t really understand learning. Nor do the other groups, and yet facilitating the discussion in the network is the most important external role.  But who funds it?

Let’s be real; no one wants to have to own the cost when there’re other things they’re already doing. But I’d argue that it’s the best investment an L&D organization could make, as it will likely have the biggest impact on the organization. Well, if you really are looking to move needles on key business metrics.  So, where do you think it could, and should, reside?

Sharing Failure

26 January 2012 by Clark 4 Comments

I’ve earlier talked about the importance of failure in learning, and now it’s revealed that Apple’s leadership development program plays that up in a big way.  There are risks in sharing, and rewards. And ways to do it better and worse.

In an article in Macrumors (obviously, an Apple info site), they detail part of Adam Lashinsky’s new Inside Apple book, which reports on Apple’s executive development program.  Steve Jobs hired a couple of biz school heavyweights to develop the program, and apparently “Wherever possible the cases shine a light on mishaps…”.  They use examples from other companies and, importantly, Apple’s own missteps.

Companies that can’t learn from mistakes, their own and others’, are doomed to repeat them.  In organizations where it’s not safe to share failures, where anything you say can and will be held against you, the same mistakes will keep getting made.  I’ve worked with firms that have very smart people, but their culture is so aggressive that they can’t admit errors.  As a consequence, the company continues to make them, and gets in its own way.  You don’t want to celebrate failure, but you do want to tolerate it. What can you do?

I’ve heard a great solution.  Many years ago now, at the event that led to Conner & Clawson’s Creating a Learning Culture, one small company shared their approach: they ring a bell not when a mistake is made, but when the lesson’s learned.  They’re celebrating, and importantly sharing, the learning from the event.  This is a beautiful idea, and a powerful opportunity to use social media when the message goes beyond a proximal group.

There’s a lot that goes on behind this, particularly in terms of having a culture where it’s safe to make mistakes.  Culture eats strategy for breakfast, as the saying goes.  What is a problem is making the same mistake, or dumb mistakes.  How do you prevent the latter?  By sharing your thinking, or thinking out loud, as you develop your planned steps.

Now, just getting people sharing isn’t necessarily sufficient.  Just yesterday (as I write), Jane Bozarth pointed me towards an article in the New Yorker (at least the abstract thereof) that argues why brainstorming doesn’t work.  I’ve said many times that the old adage “the room is smarter than the smartest person in the room” needs a caveat: if you manage the process right.  There are empirical results that separate what works from what doesn’t, such as: having everyone think on their own first, then share; focusing initially on divergence before convergence; making a culture where it’s safe, even encouraged, to have a diversity of viewpoints; etc.

No one says getting a collaborating community is easy, but like anything else, there are ways to do it, and do it right.  And here too, you can learn from the mistakes of others…

Will tablets diverge?

25 January 2012 by Clark 2 Comments

After my post trying to characterize the differences between tablets and mobile, Amit Garg similarly posted that tablets are different. He concludes that “a conscious decision should be made when designing tablet learning (t-learning) solutions”, and goes further to suggest that converting elearning or mlearning directly may not make the most sense.  I agree.

As I’ve suggested, I think the tablet’s not the same as a mobile phone. It’s not always with you, and consequently it’s not at hand for any and every use.  A real mobile device is useful for quick information bursts, not sustained attention to the device.  (I’ll suggest that listening to audio, whether canned or a conversation, isn’t quite the same; there the mobile device is a vehicle, not the main source of interaction.)  Tablets, in general, are for more sustained interactions: while they can be used for quick hits, the screen size supports longer engagement.

So when do you use tablets?  I believe they’re valuable for regular elearning, certainly, though you would want to design for the touchscreen interface rather than mimic a mouse-driven interaction.  Of course, I believe you also should not replicate the standard garbage elearning, and instead take advantage of the chance to rethink the learning experience, as Barbara Means suggested in the SRI report for the US Department of Education, which found eLearning outperforming F2F.  It’s not because of the medium itself, but because of the chance to redesign the learning.

So I think that tablets like the iPad will be great elearning platforms. Unless the task is inherently desktop, the intimacy of the touchscreen experience is likely to be superior.  (Though, regarding Apple’s new market move, the books can be stunning, but they’re not a full learning experience.)  But that’s not all.

Desktops, and even laptops, don’t have the portability of a tablet. I, and others, find that tablets are taken more places than laptops. Consequently, they’re available for use as performance support in more contexts than laptops (though not as many as smart or app phones).  I think there’ll be a continuum of performance support opportunities; constraints like quantity of information (I’d rather look at a diagram on a tablet), constraints of time & space in the performance context, and preexisting pressures for pods (smartphone or PDA) versus tablets will determine the solution.

I do think there will be times when you can design performance support to run on both pads and pods, and times you can design elearning for both laptop and tablet (and tools will make that easier), but you’ll want to do a performance context analysis as well as your other analyses to determine what makes sense.

Changing the Book game

20 January 2012 by Clark 2 Comments

I was boarding a plane away from home as Apple’s announcement was happening, so I haven’t had the chance to dig into the details as I normally would, but just the news itself shows Apple is taking on yet another industry. What Apple did to the music industry is a closer analogy to what is happening here than what they did to the phone industry, however.

As Apple recreated the business of music publishing, they’re similarly shifting textbook publishing. They’ve set a price cap (ok, perhaps just for high school, to begin), and a richer target product. In this case, however, they’re not revolutionizing the hardware, but the user experience, as their standard has a richer form of interaction (embedded quizzes) than the latest ePub standard they’re building upon.  This is a first step towards the standard I’ve argued for, with rich embedded interactivity (read sims/games).

Apple has also democratized the book creation business, with authoring tools for anyone. They kind of did that with GarageBand, but this is easier.  Publishers will have the edge on homebrew for now, with a greater infrastructure to accommodate different state standards, and media production capabilities or relationships.  That may change, however.

Overall, it will be interesting to see how this plays out. Apple, once again making life fun.

Stop creating, selling, and buying garbage!

12 January 2012 by Clark 14 Comments

I was thinking today (on my plod around the neighborhood) about why we’re still seeing so much garbage elearning (and frankly, I had a stronger term in mind).  And it occurred to me that there are multitudinous explanations, but it’s got to stop.

One of the causes is unenlightened designers. There are lots of them, for lots of reasons: trainers converted, lack of degree, old-style instruction, myths, templates; the list goes on. You know, it’s not like one dreams of being an instructional designer as a kid.  This is not to question their commitment, but even if they did have courses, they’d likely still not be exposed to much about the emotional side, for instance. Good learning design is not something you pick up in a one-week course, sadly.  There are heuristics (Cathy Moore’s action mapping, Julie Dirksen’s new book), but the importance of learning design isn’t understood and valued.  And the pressures designers face are overwhelming if they do try to change things.

That’s because their organizations largely view learning as a commodity. It’s seen as a nice-to-have, not as critical to the business.  It’s about keeping the cost down, instead of looking at the value of improving the organization.  I hear tell of managers telling the learning unit “just do that thing you do” to avoid a conversation, when the unit does try, about whether a course is actually the right solution!  Organizations don’t know how to hire the talent they really need, it’s thin on the ground, and given learning’s seen as a commodity, they’re unlikely to be willing to really develop the necessary competencies (even if they knew what those are).

The vendors don’t help. They’ve optimized to develop courses cost-effectively, since that’s what the market wants. When they try to do what really works, they can’t compete on cost with those who are selling nice looking content, with mindless learning design.  They’re in a commodity market, which means that they have to be efficiency oriented.  Few can stake out the ground on learning outcomes, other than an Allen Interactions perhaps (and they’re considered ‘expensive’).

The tools are similarly focused on optimizing the efficiency of translating PDFs and PowerPoints into content with a quiz. It’s tarted up, but there’s little guidance for quality.  When there is, it’s old school: you must have a Bloom’s objective, and you must match the assessment to the objective. That’s fine as far as it goes, but who’s pushing the objectives to line up with business goals?  Who’s supporting aligning the story with the learner? That’s the designer’s job, but they’re not equipped.  And tarted-up quiz show templates aren’t the answer.

Finally, the folks buying the learning are equally complicit. Again, they don’t know the important distinctions, so they’re told it’s soundly instructionally designed, it looks professional, and they buy the cheapest that meets the criteria.  But so much is coming from broken objectives, rote understanding of design, and other ways it can go off the rails, that most of it is a waste of money.

Frankly, the whole design part is commoditized.  If you’re competing on the basis of hourly cost to design, you’re missing the point. Design is critical, and the differences between effective learning and clicky-clicky-bling-bling are subtle.  Everyone accepts paying for technology development, but not for the learning design.  And that’s wrong.  Look, Apple’s products are fantastic technologically, but they command a premium because of the quality of the experience, and that comes from the design.  It’s the experience and outcome that matter, yet no one’s investing in learning on this basis.

It’s all understandable, of course (sort of like the situation with our schools), but it’s not tolerable.  The costs are high: meaningless jobs, money spent for no impact; it’s just a waste.  And that’s just for courses; how about the times the analysis isn’t done that might indicate some other approach?  Courses cure all ills, right?

I’m not sure what the solution is, other than calling it out, and trying to get a discussion going about what really matters, and how to raise the game. Frankly, the great examples are all too few. As I’ve pointed out in a previous post, the awards really aren’t discriminating. I think folks like the eLearning Guild are doing a good job with their DevLearn showcase, but it’s finger-in-the-dike stuff.

Ok, I’m on a tear, and usually I’m a genial malcontent.  But maybe it’s time to take off the diplomatic gloves, and start calling out garbage when we see it.  I’m open to other ideas, but I reckon it’s time to do something.

Level of ‘levels’

10 January 2012 by Clark 11 Comments

I was defending Kirkpatrick’s levels the other day, and after being excoriated by my ITA colleagues, I realized there was not only a discrepancy between principle and practice, but between my interpretation and the model as it’s espoused.  Perhaps I’ve been too generous.

The general idea is that there are several levels at which you can evaluate interventions:

  1. whether the recipient considered the intervention appropriate or not
  2.  whether the recipient can demonstrate new ability after the intervention
  3. whether the intervention is being applied in the workplace, and
  4. whether the intervention is impacting desired outcomes.

That this is my  interpretation became abundantly clear.  But let’s start with what’s wrong in practice.

In practice, first, folks seem to think that just doing level 1 (‘smile sheets’) is enough. Far fewer people take the next logical step and assess level 2, and when they do, it’s too often a knowledge test.  Both of these fail to understand the intention: Kirkpatrick (rightly) said you have to start at level 4. You have to care about a business outcome you’re trying to achieve, and then work backwards: what performance change in the workplace would lead to the desired outcome?  Then you can design a program to equip people to perform appropriately, determine whether they can, and finally see if they like it.  And, frankly, level 1 is useless until you have had the desired impact and then care to ensure a desirable user experience.  As a standalone metric, it ranks right up there with measuring learning effectiveness by the pound of learners served.
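The working-backwards discipline can be sketched as a tiny planning helper. To be clear, this is purely illustrative: the class, function, and example outcomes are my invention, not part of the Kirkpatrick materials.

```python
# Hypothetical sketch: plan evaluation backwards from the business outcome
# (level 4) down to the learner experience (level 1). All names invented.
from dataclasses import dataclass

@dataclass
class EvaluationPlan:
    level4_outcome: str      # business metric you want to move
    level3_behavior: str     # workplace performance change that moves it
    level2_assessment: str   # demonstration that people can perform
    level1_experience: str = "post-program satisfaction check"

def plan_backwards(outcome: str, behavior: str, assessment: str) -> list[str]:
    """Return evaluation steps in design order: 4 first, 1 last."""
    plan = EvaluationPlan(outcome, behavior, assessment)
    return [
        f"Level 4: measure {plan.level4_outcome}",
        f"Level 3: observe {plan.level3_behavior}",
        f"Level 2: assess {plan.level2_assessment}",
        f"Level 1: run {plan.level1_experience}",
    ]

steps = plan_backwards(
    "customer retention rate",
    "reps using the new objection-handling approach",
    "role-play performance against a rubric",
)
```

The point of the ordering is that the level 1 check only appears once everything above it is pinned down, mirroring the "start at level 4" argument.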

Now, one of the things my colleagues pointed out to me, beyond the failure in implementation, is that Kirkpatrick assumes it has to be a course.  If the model were just misused, I couldn’t lay blame, but my colleagues proceeded to quote chapter and verse from the Kirkpatrick site to document that the Kirkpatricks do think courses are the solution. Consequently, any mention of Kirkpatrick only reinforces the notion that courses are the salve for all ills.

Which I agree is a mindset all too prevalent, and so we have to be careful of any support that could lead to a regression to the status quo.  Courses are fine when you’ve determined that a skill gap is the problem.  And then, applying Kirkpatrick starting with Level 4 is appropriate.  However, that’s more like 15% of the time, not 100%.

So where did I go wrong?  As usual, when I look at models, I abstract to a useful level (my PhD focused on this, and Felice Ohrlich did an interesting study that pointed out how the right level of abstraction is critical).  So, I didn’t see it tied to courses; in principle it could be used for performance support as well (at least levels 3 and 4), and also for some social learning interventions.

Moreover, I was hoping that by starting at level 4, you’d look to the outcome you need, and be more likely to consider other solutions as well as courses.  But I had neglected the pragmatic issue that the Kirkpatricks imply courses are the only workplace intervention to move the needles, and that’s not good.  So, from now on I’ll have to be careful in my references to Kirkpatrick.

The model of assessing the change needed and working backward is worthwhile, as is doing so systematically.  Consequently, at an appropriate level of abstraction, the model’s useful.  However, in its current incarnation it carries too much baggage to be recommended without a large amount of qualification.

So I’ll stick to talking about impacting the business, and determining how we might accomplish that, rather than talk about levels, unless I fully qualify it.

Performance Architecture

6 January 2012 by Clark 3 Comments

I’ve been using the tag ‘learning experience design strategy’ as a way to think about not taking the same old approach of events über alles.  The fact of the matter is that we’ve got quite a lot of models and resources to draw upon, and we need to rethink what we’re doing.

The problem is that it goes far beyond just a more enlightened instructional design, which of course we need.  We need to think of content architectures, blends between formal and informal, contextual awareness, cross-platform delivery, and more.  It involves technology systems, design processes, organizational change, and more.  We also need to focus on the bigger picture.

Yet the vision driving this is, to me, truly inspiring: augmenting our performance in the moment and developing us over time in a seamless way, not in an idiosyncratic and unaligned way.  And it is strategic, but I’m wondering if architecture doesn’t better capture the need for systems and processes as well as revised design.

This got triggered by an exercise I’m engaging in, thinking how to convey this.  It’s something along the lines of:

The curriculum’s wrong:

  • it’s not knowledge objectives, it’s skills
  • it’s not current needs, it’s adapting to change
  • it’s not about being smart, it’s about being wise

The pedagogy’s wrong:

  • it’s not a flood, but a drip
  • it’s not knowledge dump, it’s decision-making
  • it’s not expert-mandated, instead it’s learner-engaging
  • it’s not ‘away from work’, it’s in context

The performance model is wrong:

  • it’s not all in the head, it’s distributed across tools and systems.
  • it’s not all facts and skill, it’s motivation and confidence
  • it’s not independent, it’s socially developed
  • it’s not about doing things right, it’s about doing the right thing

The evaluation is wrong:

  • it’s not seat time, it’s business outcomes
  • it’s not efficiency, at least until it’s effective
  • it’s not about norm-referencing, it’s about criteria

So what does this look like in practice?   I think it’s about a support system organized so that it recognizes what you’re trying to do, and provides possible help.  On top of that, it’s about showing where the advice comes from, developing understanding as an additional light layer.  Finally, it’s about making performance visible and looking at performance across the previous layers, facilitating learning to learn. And the underlying values are also made clear.
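That layering can be sketched as a toy lookup: help in the moment, the rationale behind it as a light understanding layer, and a hook for making performance visible. Everything here (the task names, the field names) is hypothetical, just to make the shape concrete.

```python
# Hypothetical sketch of the layered support idea. The task catalog and
# field names are invented for illustration only.
HELP = {"write-proposal": "Checklist: audience, problem, solution, cost."}
RATIONALE = {"write-proposal": "Distilled from proposals that won last year."}

def support_response(task: str) -> dict:
    """Layer performance help, underlying rationale, and a tracking hook."""
    return {
        "help": HELP.get(task, "No guidance yet; ask the community."),
        "why": RATIONALE.get(task, ""),   # the light 'understanding' layer
        "track": f"log:{task}",           # hook for making performance visible
    }

resp = support_response("write-proposal")
```

The design choice worth noting is the fallback: when no curated help exists, the system routes to the community rather than going silent, which is where the social layer comes in.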

You don’t have to get all of that right away.  It can start with just better formal learning design, and a bit of content granularity. It certainly starts with social media involvement, and adapting the culture in the org to start developing meta-learning.  But you want to have a vision of where you’re going.

And what does it take to get here?  It needs a new design approach that starts from the performance gap and looks at root causes. The design process then considers what sort of experience would both achieve the end goal and address the gaps in the performer equation (including both technology aids and knowledge and skill upgrades), and considers how that develops over time, recognizing the capabilities of both humans and technology, with a value set that emphasizes letting humans do the interesting work.  It’ll also take models of content, users, context, and goals, with a content architecture and a flexible delivery model, plus rich pictures of what a learning experience might look like and what learning resources could be.  And an implementation process that is agile, iterative, and reflective, with contextualized evaluation.  At least, that sounds right to me.

Now, what sounds right to you: learning experience design strategy, performance system design, performance architecture, <your choice here>?

 

Further (slow) thoughts on learning #change11

9 December 2011 by Clark 4 Comments

I’ve been monitoring the comments on my #change11 posts, and rather than address them individually, I’m posting responses.  So, a couple of questions have recurred about the slow learning concept.  One is how the notion of quick small bites reflects a slower learning process.  Another is how it might play out in the organization.  And a final one is about the overall pedagogy.

To address the first one, the notion is that the learnings are wrapped around the events in your life, rather than you being taken away from the context of your life to have a learning experience.  I think of this as embedded learning versus event learning.  Yes, it’s quick bits, but they don’t mean as much on their own as in their cumulative effect over time.  Whereas the cumulative effect of the event model dissipates quickly, the distributed model builds slowly!

To address the pedagogy, it’s about having little bits of extra information that connect to the events in your life, not separate (unless the events in your life aren’t frequent enough, in which case we might create little ‘alternate reality’ events that provide plausible and fun scenarios with the desired practice to develop you along the path).  It’s not breaking up event-based learning into smaller chunks so much as wrapping around the meaningful events in your life, when possible.
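One way to make the "bits over time" idea concrete is a scheduler that spaces small learning bites at expanding intervals, drip rather than flood. This is a minimal sketch under an assumed simple doubling schedule; the function and the example bites are invented, not any particular spacing algorithm from the literature.

```python
# Hypothetical 'drip' scheduler: each bite lands at an expanding interval
# (1, 2, 4, ... days apart), rather than all at once in one event.
from datetime import date, timedelta

def drip_schedule(start: date, bites: list[str],
                  base_days: int = 1, factor: int = 2) -> list[tuple[date, str]]:
    """Return (date, bite) pairs with doubling gaps between bites."""
    schedule, offset = [], 0
    for i, bite in enumerate(bites):
        schedule.append((start + timedelta(days=offset), bite))
        offset += base_days * (factor ** i)   # gap doubles after each bite
    return schedule

plan = drip_schedule(
    date(2012, 1, 1),
    ["concept intro", "worked example", "practice scenario", "reflection prompt"],
)
```

With the defaults, the four bites land on days 0, 1, 3, and 7, so the cumulative effect builds over a week instead of dissipating after a single event.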

And that pedagogy will very much be our choice.  I do hope we can take the opportunity to include a sufficient level of challenge, and the opportunity to personalize it, rather than keep it generic: considering minimalist approaches, weaving in learning-to-learn, connecting people as well as providing additional information.  For instance, we should be asking personalization questions afterward (whether via system or person).  The algorithms hopefully will have some serendipity as well as relationships to my personal experience.  Some elliptical material, too: this would support discovering new relationships in learning as we mine the effects of some random juxtapositions across many experiences.

How to make this practical in organizations worried about immediate productivity?  In my experience, it’s already happening. Folks are (trying to) take responsibility for their learning.  They take social media cigarette breaks to go out and connect to their networks when the office blocks access through the firewall.  They’re discussing work topics in LinkedIn groups, and using Twitter both to track new things and to get questions answered.  The question really is whether orgs will ignore or hamper this versus facilitating it.  That’s why I’m part of the Internet Time Alliance, where we are working with organizations to help them start supporting learning, not just offering training.

We do see small moves toward slow learning, but I don’t like to assume everyone’s yet capable of taking ownership of it. And, yes, the sad state of the world is that typical schooling and old-style management can squelch the love of learning and fail to develop the skills that are needed.  We have multiple challenges, and I’m just suggesting that the concept of slow learning, a drip-irrigation versus flood metaphor, is a wedge to help drive us out of the event-based model and start addressing the issues raised: pedagogy, curricula, infrastructure, technology, politics, and more.  The efforts to build such a system, I reckon, will force progress on many fronts.  Whether it’s the best approach to do that is a separate question. I welcome your thoughts.

And a thanks to all for their participation this week, it’s been a learning experience for me as well!

Making Slow Learning Concrete #change11

7 December 2011 by Clark 5 Comments

It occurs to me that I’ve probably not conveyed in any concrete terms what I think the ‘slow learning’ experience might be like. And I admit that I’m assuming a technology environment in the concrete instance (because I like toys).  So here are some instances:

Say you’ve a meeting with a potential client.  You’ve been working on how to more clearly articulate the solutions you offer, and on listening to the customer to establish whether there’s a match or not. You’ve entered the meeting into your calendar, and indicated the topic by the calendar, tags, the client, or some other way. So, shortly before the meeting, your system might send you a reminder that both reiterates the ‘message’ you’d worked out and reminds you to pull out the client’s issues.  Then, there might be a tool provided during the meeting (whether one you’d created, one you’d customized, or a stock one) to help capture the important elements. Afterwards, the system might provide you with a self-evaluation tool, or even connect you to a person for a chat.
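As a toy illustration of that pre-meeting nudge, here's a sketch that scans a (hypothetical) calendar structure and fires reminders for tagged events inside a lead window. The event fields, tags, and reminder text are all invented; a real system would hook into an actual calendar API.

```python
# Hypothetical pre-meeting nudge. Events are dicts with a 'start' time and
# a 'tag'; reminders keyed by tag are fired within a lead window.
from datetime import datetime, timedelta

REMINDERS = {
    "client-meeting": ["Reiterate the message you worked out.",
                       "Listen for the client's issues before pitching."],
}

def due_nudges(events: list[dict], now: datetime,
               lead: timedelta = timedelta(minutes=30)) -> list[str]:
    """Return reminder lines for tagged events starting within the lead window."""
    nudges = []
    for ev in events:
        if now <= ev["start"] <= now + lead:
            nudges.extend(REMINDERS.get(ev["tag"], []))
    return nudges

events = [{"start": datetime(2012, 2, 22, 10, 0), "tag": "client-meeting"}]
nudges = due_nudges(events, datetime(2012, 2, 22, 9, 40))
```

Twenty minutes before the 10:00 meeting, both reminder lines fire; outside the window, nothing does, which is the "quiet until relevant" behavior the scenario describes.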

Or, say, you’re walking around a new town.  Your system might regularly suggest some topics of interest, depending on your interests: showing architecture, history, or socioeconomic indicators.  You could ignore them, or follow them up.  Ideally, it’d also start connecting some dots: showing a picture from a previous trip and suggesting “Remember we saw an example of <this> architect’s work here?  Well, right here we have the evolution of that form; see how the arches have…”  So it’s making connections for you.  You can ignore, pursue further, or whatever. It might make a tour for you on the fly, if you wanted.  If you were interested in food, it might say: “we’ve been exploring Indian food; you apparently have no plans for dinner, and there’s an Indian restaurant near here that would be a way to explore Southern Indian cuisine”.

Another situation might be watching an event, and having extra information laid on top. So instead of just watching a game, you could see additional information that is being used by the coaches to make strategic decisions: strengths and weaknesses of the opposing team in this context, intangible considerations like clock management, or the effects of wind.

And even in formal schooling, if you’re engaged in either an individual or group problem, the system might well be available to provide a hint, as well as, of course, tools to hand.

The notion is that you might have more formal and informal goals, and the system would layer on information, augmenting your reality with extra information aligned to your interests and goals, making the world richer.  It could and would help performance in the moment, but also layer on some concepts on top.

I see this as perhaps a mobile app that has some way of notifying you (e.g. its own signature ‘sound’), a way to sense context, and more. It might ask for your agreement to have a link into the task apps you use, so it has more context information, but also knows when and where you are.

This isn’t the only path to slow learning.  Ideally, it’d just be a rich offering of community-generated resources and people to connect with in the moment, but to get people ready to take advantage of that it might need some initial scaffolding.  Is this making sense?
