Learnlets


Clark Quinn’s Learnings about Learning

Conceptualizing the Performance Ecosystem

9 April 2009 by Clark 3 Comments

So I’ve been playing with rethinking my Performance Ecosystem conceptualization and visualization. The original had very discrete components and an almost linear path, and that doesn’t quite convey the reality of how things are tied together. I believe it’s useful to help people see the components, but it doesn’t capture the goal of an integrated system.

I’ve been wrestling with my diagramming application (OmniGraffle) to rethink it. My notion is that systems, e.g. content/knowledge/learning management systems, underpin the learnscape, and that on top sit formal learning, performance support (like job aids organized into portals), and social media. Mobile is a layer that floats on top, making the capabilities assembled below contextually accessible. It’s not perfect, but it’s an evolving concept (perpetual beta, right?).

So here’s my current conception. It took me a long time to create the circle with the different components! First I had to discover that there were tools to create freeform shapes, and then work to get them to articulate, but I like the kind of ‘rough’ feel of it (appropriate for its stage).

It also captures the conceptual relationships as spatial relationships (my principle for diagram creation). At least for me. So here’s the question: does it make sense for you? Does it help you perceive what I’m talking about, or is it a) too coarse, b) confusing, or c) something else? I welcome your feedback!

Model learning

8 April 2009 by Clark 4 Comments

On Monday, a hearty Twitter exchange emerged when Jane Bozarth quoted Roger Schank: “Why do we assume that theories of things must be taught to practitioners of those things?” I stood up for theory, Cammy Bean and Dave Ferguson chimed in, and next thing you know, we were having a lively discussion in 140 characters. With all the names to include, Dave pointed out, we had even less space!

One side was stoutly defending that what SMEs thought was important wasn’t necessarily what practitioners needed.   The other side (that would be me) wanted to argue that it’s been demonstrated that having an underlying model is important in being able to deal with complex problems.

So, of course, the issue really was what we mean by theory. It’s easy (and correct) to bash conceptual knowledge frameworks that don’t have applicability to the problem at hand; Dave revived the great quote: “In theory, there’s no difference between theory and practice. But in practice, there is.” He also cited Van Merrienboer & Kirschner as saying that teaching theory to successful practitioners can be detrimental. (BTW, see Dave’s great series of posts ‘translating’ their work.) On the other hand, having models has clearly been shown to be valuable in adapting to complexity and ambiguity. What’s a designer to do?

So, let me be clear.   If there’s a rote procedure to be followed, there’s no need for a theory.   In fact, there’s no need for training, since you ought to automate it!   Our brains are good at pattern matching, bad at rote repetition, and it seems to me to be sad if not criminal to have people do rote stuff that could be done better by machine; save the interesting and challenging tasks for us!

It’s when tasks are complex, ill-structured, and/or ambiguous with lots of decisions, that we need theories.   Or, rather, models.   Which, I think, is part of the confusion (and I may be to blame! :).

I’m talking about an understanding of the underlying model that guides performance. Any approach to a problem has (or should have) a rationale behind it: why you do it this way, not that way. It’s based upon some theory, but it should be resolved into a model with just enough richness to help you decide when to do X and when to do Y. As I said many years ago:

I see mental models as dynamic.   That is, they’re causal explanations of system behaviour.   They are used to explain observed outcomes and to predict the effects of perturbations.

It’s the explanation and prediction capabilities that are important.   The problem is, if the situation’s complex enough (and most are, whether it’s controlling a production line, or dealing with a customer, or…), you can’t train on all the situations that a learner might face.   So then you need to provide guidance.   Yes, we’ll use example and practice context to support transfer, but we should refer back to a model that guides our performance. And that’s useful and necessary.

Cammy noted that it’s extra work to develop that model, and I acknowledge that.   I’ve said that good instructional design requires more work and knowledge on the part of the designer than we typically expect, which is why I don’t think you can do good ID without knowing some learning theory. (BTW, my Broken ID series addresses a lot of the above.)

So, let me be clear: in any reasonably complex domain (and you shouldn’t be training for simple issues: just give a job aid or automate or…), you should present the learner with a model that you reinforce in examples and practice.   It should not be an abstract academic theory, but a practical guide to why things are done this way and what governs the adaptation to circumstances.   As that model is acquired through examples and practice, you provide the basis for self-improving performance.

That’s my model for designing effective learning.   What’s yours?

On a side note, what I recall of the various tweets and what Twitter shows from each person don’t correlate perfectly. While I acknowledge my memory failing more frequently (just age, not dementia or Alzheimer’s, I *think*), I’m pretty sure that Twitter dropped some of those messages from the record (around the same time they acknowledged trouble with dropped avatar images). Tweeter beware!

Live Long

31 March 2009 by Clark 2 Comments

The controversy surrounding the formal/informal roles has suddenly created a flurry of excitement around a post on eLearn Mag.   However, I’ve addressed it over at the TogetherLearn site, as it seemed somewhat appropriate to respond from the perspective of a champion of social and informal learning.

In short, I point to the issues covered in the Broken ID series, and say that formal instruction isn’t the greatest thing to champion in its current form. It may persist, but hopefully in a far better state than most formal learning we see today. No one’s championing the demise of formal learning, just its improvement, in conjunction with informal learning rather than as a single solution.

Transformative Experience Design

28 March 2009 by Clark 6 Comments

As part of the continual rethink about what I offer and to whom (e.g. training department rethinks for managers, directors, and VPs; experience design reviews/refinements for learning teams), my thoughts on learning experience design took a leap. I’ve argued that the skills in Engaging Learning (my book) are the ones that are critical for Pine & Gilmore’s next step beyond their experience economy, the transformative experience economy. But I’ve started to think deeper.

John Seely Brown challenged us at the Learning Irregulars meeting that what fundamentally made a difference was a ‘questing disposition’ found in certain active learning communities. This manifests as an orientation to experimentation and learning. My curiosity was whether it could be developed, as I’m loath to think that the 10% who learn despite schooling :) is a fixed proportion, because I believe that more and better learning has a chance to change our world for the better.

I hadn’t finished the article he subsequently sent me (more on that soon), but it drove me back to some early thinking on attitude change. I recognize that learning skills alone isn’t enough, and that a truly transformative experience needs to subjectively result in a changed worldview, a feeling of new perspectives. This could be a change in attitude, a new competency, or a fundamental change in perspective.

Which brings me back to looking at myth and ritual, something I tried to get my mind around before. I was looking for the Complete Idiot’s Guide to Ritual, and the closest thing I could find is Rappaport’s Ritual and Religion in the Making of Humanity, which is almost impenetrably dense (and I’m trained and practiced at reading academic prose!). The takeaway, however, is that ritual is hard to design; most artificial attempts fail miserably.

Others have suggested that transformation is at core about movement, which takes me back to ritual.   Both a search on transformation and a twitter response brought that element to the surface.   The other element that the search found was spirituality (not just religious).   Which is not surprising, but not necessarily useful.

Naturally, I fall back to thinking from the perspective of creating an experience that will yield that transformational aesthetic, but it’s grounded in intuition rather than any explicit guidance. Still, I think there’s something necessary in the perspective that skills alone aren’t enough, and as I said before, as many of our barriers may be attitude or motivation as knowledge and skills.

I’ve skimmed ahead in JSB’s article, and can see I need a followup post, but in the interim, I’d welcome your thoughts on designing truly transformative experiences, not just learning experiences.

Monday Broken ID Series: Process

22 March 2009 by Clark 5 Comments

Previous Series Post

This is the last formal post in a series of thoughts on some broken areas of ID that I’ve been posting for Mondays.   The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

We’ve been talking about lots of ways instructional design can be wrong, but if that’s the case, the process we’re using must be broken too.   If we’re seeing cookie-cutter instructional design, we must not be starting from the right point, and we must be going about it wrong.

Realize that the difference between really good instructional design, and ordinary or worse, is subtle.   Way too often I’ve had the opportunity to view seemingly well-produced elearning that I’ve been able to dismantle systematically and thoroughly.   The folks were trying to do a good job, and companies had paid good money and thought they got their money’s worth.   But they really hadn’t.

It’d be easy to blame the problems on tight budgets and schedules, but that’s a cop-out.   Good instructional design doesn’t come from big budgets or unlimited timeframes, it comes from knowing what you’re doing.   And it’s not following the processes that are widely promoted and taught.

You know what I’m talking about – the A-word, that five letter epithet – ADDIE.   Analysis, Design, Development, Implementation, and Evaluation.   A good idea, with good steps, but with bad implementation.   Let me take the radical extreme: we’re better off tossing out the whole thing rather than continue to allow the abominations committed under that banner.

OK, now what am I really talking about? I was given a chance to look at an organization’s documentation of their design process. It was full of taxonomies, and process, and all the ID elements. And it led to boring, bloated content. That’s what you get if you follow all the procedures without a deep understanding of the underpinnings that make the elements work, without knowing what can be finessed based upon the audience, and without adding the emotional elements that instructional design largely leaves out (with the grateful exception of Keller’s ARCS model).

The problem is that more people are doing design than have sufficient background, as Cammy Bean’s survey noted.   Not that you have to have a degree, but you do have to have the learning background to understand the elements behind the processes.   Folks are asked to become elearning designers and yet haven’t really had the appropriate training.

Blind adherence to ADDIE will, I think, lead to more boring elearning than someone creative following their best instincts about how to get people to learn would produce. Again, Cathy Moore’s Action Mapping is a pretty good shortcut that I’ll suggest will lead to better outcomes than ADDIE.

Which isn’t to say that following ADDIE when you know what you’re doing, and have a concern for the emotional and aesthetic side (or a team with the same), won’t yield a good result; it will. Following ADDIE blindly will likely yield something that’s pretty close to effective, but it’s so likely to be undermined by the lack of engagement that there’s severe cause for worry.

And, worse, there’s little in there to ensure that the real need is met: asking the designer to go beyond what the SME and client tell you and ensure that the behavior change is really what’s needed. The Human Performance Improvement model actually does a better job of that, as far as I can tell.

It’s not hard to fix the problem. Start by finding out what significant decision-making change will impact the organization or individual, and work backward from there, as the previous posts have indicated. I don’t mean to bash ADDIE, as it’s conceptually sound from a cognitive perspective; it just doesn’t extend far enough pragmatically in terms of focusing on the right thing, and it errs too much on the side of caution instead of focusing on the learner experience. It’s not clear to me that ADDIE would even advocate a job aid when that’s all that’s needed (and I’m willing to be wrong).

Our goal is to make meaningful change, and that’s what we need to do.   I hope this series will enable you to do more meaningful design.   There may be more posts, but I’ve exhausted my initial thoughts, so we’ll see how it goes.

Monday Broken ID Series: Seriation

15 March 2009 by Clark 3 Comments

Previous Series Post | Last Series Post

This is one in a series of thoughts on some broken areas of ID that I’ve been posting for Mondays.   The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

Instructional design has established that the correct order of elements is introduction – concept – example – practice (and feedback) – summary. While that’s a good default, it doesn’t have to be that way, and there are times when it makes sense to provide other approaches or even self-navigation. What we shouldn’t see is the prevalent click-to-advance ‘next’ button, with only linear navigation forwards and back. Or, rather, we shouldn’t see that without some other support. And more.

When we did a course on speaking to the media (and without an LMS to handle the navigation, so no built-in ‘next’ button), we had a scheme that both provided a good default and allowed self-navigation. We had the elements of each of the 3 modules labeled from a learner perspective (e.g. Show Me, Let Me). We had a nav bar in the upper left that let you choose where to go. At the bottom of the screen (we erred toward scrolling rather than one page, to minimize clicks and load times; this was over 10 years ago) were also some options of where to go next, with one indicated as the recommended choice. We graphically supported this with a dotted line leading the learner through the content and to the default choice (follow the bouncing ball). A minimal sketch of that idea appears below.
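For the curious, here’s a minimal sketch (in Python) of how such a scheme might be modeled: a recommended default path plus a nav bar that leaves every choice open. The module name and most element labels are hypothetical; only ‘Show Me’ and ‘Let Me’ come from the actual course.

```python
# Hypothetical course structure: one module with learner-perspective labels.
MODULES = {
    "Facing the Camera": ["Introduce Me", "Show Me", "Let Me", "Wrap Up"],
}

def recommended_next(module, current):
    """The 'bouncing ball': the suggested next element in the default order."""
    elements = MODULES[module]
    idx = elements.index(current)
    return elements[idx + 1] if idx + 1 < len(elements) else None

def nav_options(module):
    """The nav bar: every element stays available, so learners can jump anywhere."""
    return list(MODULES[module])

if __name__ == "__main__":
    print(recommended_next("Facing the Camera", "Show Me"))  # -> "Let Me"
    print(nav_options("Facing the Camera"))                  # all choices remain open
```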

Was there benefit from this? Anecdotally, I heard (I’d returned to the US) that about half the users followed the bouncing ball, while the other half (presumably the self-capable learners) took the initiative for their own learning and used the nav bar to go where and when *they* wanted to. I note that UNext/Cardean had a similar nav structure at one time.

Now, you may have heard of case-, problem- or project-based learning. In this case, before you present the concept, you present either an example (a case-study) or a problem.   These serve as the introduction, but are attuned to different ways of learning.

If you buy into some of the learning style models, they have cycles through different learning approaches, but recognize that different learners could prefer to start in different areas. That was the premise that drove at least part of the strategy behind the adaptive learning system project I led from 1999-2001. We had the system recommend a path, and alternatives, but it was based upon who they were as a learner.

It turns out that some learners might prefer an example first, which links concept to context; some prefer a problem first, to get concrete about what the situation’s about; and some might prefer the more typical approach. We didn’t have all the answers at the time, but we had a good set of rules, and we were going to extract better ones as we went along. The sketch below gives the flavor of that rule-based approach.
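To illustrate the flavor of rule-based sequencing, here’s a toy example. These are not the actual system’s rules, and the learner attributes are invented; it only shows the kind of decision being made.

```python
# A toy rule set: pick which element to present first, based on a
# (hypothetical) learner profile. Illustrative only.
def first_element(learner):
    if learner.get("prefers_concrete"):
        return "problem"   # get concrete about the situation first
    if learner.get("prefers_context"):
        return "example"   # link concept to context first
    return "concept"       # the typical default ordering

print(first_element({"prefers_concrete": True}))  # -> "problem"
```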

The point is, while a good default is a reasonable choice, having some alternative paths might be worth considering, and allowing learner navigation is almost essential. Allowing learners to test out is a good option as well. Don’t lock your learners into a linear experience unless you’ve really designed it as an experience: focusing on the overall flow, testing, and refining until your learners tell you it is an experience. And I do recommend that; it’s not as tough as it sounds. However, don’t take just the easy default; learners prefer and deserve choice. So consider some alternative pedagogies, consider the learner, and think outside the line.

A wee bit o’ experience…

11 March 2009 by Clark 1 Comment

A personal reflection, read if you’d like a little insight into what I do, why and what I’ve done.

Reading an article in Game Developer about some of the Bay Area history of the video game industry has made me reflective.   As an undergrad (back before there really were programs in instructional technology) I saw the link between computers and learning, and it’s been my life ever since.   I designed my own major, and got to be part of a project where we used email to conduct classroom discussion, in 1978!

Having called all around the country to find a job doing computers and learning,   I arrived in the Bay Area as a ‘wet behind the ears’ uni graduate to design and program ‘educational’ computer games.   I liked it; I said my job was making computers sing and dance.   I was responsible for FaceMaker, Creature Creator, and Spellicopter (among others) back in 81-82.   (So, I’ve been designing ‘serious games’, though these were pretty un-serious, for getting close to 30 years!)

I watched the first Silicon Valley gold rush, as the success of the first few home computers and software had every snake oil salesman promising that they could do it too.   The crash inevitably happened, and while some good companies managed to emerge out of the ashes, some were trashed as well.   Still, it was an exciting time, with real innovation happening (and lots of it in games; in addition to the first ‘drag and drop’ showing up in Bill Budge’s Pinball Construction Set, I put windows into FaceMaker!).

I went back to grad school for a PhD in applied cog sci (with Don Norman), because I had questions about how best to design learning (and I’d always been an AI groupie :).   I did a relatively straightforward thesis, not technical but focused on training meta-cognitive skills, a persistent (and, I argue, important) interest.   I looked at all forms of learning; not just cognitive but behavioral, ID, constructivist, connectionist, social, even machine learning.   I was also getting steeped in applying cognitive science to the design of systems, and of course hanging around the latest/coolest tech.   On the side, I worked part-time at San Diego State University’s Center for Research on Mathematics and Science Education working with Kathy Fischer and her application SemNet.

My next stop was the University of Pittsburgh’s Learning Research & Development Center for a post-doctoral fellowship working on a project about mental models of science through manipulable systems, and on the side I designed a game that exercised my dissertation research on analogy (and published on it).   This was around 1990, so I’d put a pretty good stake in the ground about computer games for deep thinking.

In 1991 I headed to the Antipodes, taking up a faculty position at UNSW in the School of Computer Science, teaching interface design but quickly getting into learning technology again. I was asked to supervise a project designing a game to help kids who grow up without parents learn to live on their own, and did. This was a very serious game (these kids can die because they don’t know how to be independent), around 1993. As soon as I found out about CGIs (the first ‘state’-maintaining web technology), we ported it to the web (circa 1995), where you can still play it (the tech’s old, but the design’s still relevant).

I did a couple other game-related projects, but also experimented in several other areas.   For one, as a result of looking at design processes,   I supervised the development of a web-based performance support system for usability, as well as meta-cognitive training and some adaptive learning stuff.

I joined a government-sponsored initiative on online learning, determining how to run an internet university, but the initiative lost out to politics. I jumped to another, and got involved in developing an online course that was too far ahead of the market (this would be about 1996-1997). The design was lean, engaging, and challenging, I believe (I shared responsibility), and they’re looking at resurrecting it now, more than 10 years later! I returned to the US to lead an R&D project developing an intelligent learning system, based on learning objects, that adapted to learner characteristics (hence my strong opinions on learning styles), which we got up and running in 2001 before that gold rush went bust. Since then, I’ve been an independent consultant.

It’s been interesting watching the excitement around serious games.   Starting with Prensky, and then Aldrich, Gee, and now a deluge, there’s been a growing awareness and interest; now there are multiple conferences on the topics, and new initiatives all the time.   The folks in it now bring new sensibilities, and it’s nice to see that the potential is finally being realized. While I’ve not been in the thick of it, I’ve quietly continued to work, think, and write on the issue (thanks to clients, my book, and the eLearning Guild‘s research reports).   Fortunately, I’ve kept from being pigeonholed, and have been allowed to explore and be active in other areas, like mobile, advanced design, performance support, content models, and strategy.

The nice thing about my background is that it generalizes to many relevant tasks: usability and user experience design and information design are just two examples, in addition to the work I cited, so I can play in many relevant places, and not only keep up with but also generate new ideas. My early technology experience and geeky curiosity keep me up on the capabilities of the new tools, and allow me to quickly determine their fundamental learning capabilities. Working on real projects, meeting real needs, and the ability to abstract to the larger picture have given me the ability to add value across a range of areas and needs. I find that I’m able to quickly come in and identify opportunities for improvement, pretty much without exception, at levels from products, through processes, to strategy. And I’m less liable to succumb to fads, perhaps because I’ve seen so many of them.

I’m incredibly lucky and grateful to be able to work in the field that is my passion, and still getting to work on cool and cutting edge projects, adding value.   You’ll keep seeing me do so, and if you’ve an appetite for pushing the boundaries, give me a holler!

Monday Broken ID Series: Summaries

8 March 2009 by Clark 1 Comment

Previous Series Post | Next Series Post

This is one in a series of thoughts on some broken areas of ID that I’m posting for Mondays.   The intention is to provide insight into many ways much of instructional design fails, and some pointers to avoid the problems. The point is not to say ‘bad designer’, but instead to point out how to do better design.

When it comes to closing the elearning experience, not surprisingly, we too often drop the ball here as well. Our endings tend to be abrupt: they merely rehash what has been learned and, if we’re lucky, point out further directions. Not that we don’t want to let learners know what they’ve learned, and indicate that if they want to go deeper they should go here, and that they’re now prepared to learn about this thing over there. But there’s so much more!

First of all, if we’re viewing this as an experience, developing motivation and addressing the emotional components, and we should be, then we should close off the experience emotionally as well. We should acknowledge the effort they’ve put in, and celebrate the fact that they’ve gained the ability to do something new (and it should be do something new, if you’ve got your objectives right).

Ideally, we’d personalize this, and say something like “you did really well on A, but your B was a little weak; try a bit of C to build that up” or whatever. We don’t always have the ability to track performance at this more granular level, nor the ability to make the learning content adapt in that way, but it’s conceptually feasible and you should be thinking about how you might accomplish it.
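As a rough illustration of that idea, assuming the platform tracked per-objective scores (the thresholds, objective names, and wording here are invented, not from any particular tool):

```python
# Sketch: assemble a personalized wrap-up message from per-objective scores.
def closing_feedback(scores):
    strong = [obj for obj, s in scores.items() if s >= 0.8]
    weak = [obj for obj, s in scores.items() if s < 0.6]
    parts = []
    if strong:
        parts.append("You did really well on " + ", ".join(strong) + ".")
    if weak:
        parts.append("Your " + ", ".join(weak)
                     + " was a little weak; try the extra practice to build that up.")
    return " ".join(parts) or "Solid work across the board!"

print(closing_feedback({"A": 0.9, "B": 0.5}))
```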

Also, in the introduction, we drilled down from the larger context in the world (right?), and we should similarly drill back up. Let’s reconnect the learner with the broader context, and reactivate and associate the learning experience by letting them know how what they now can do plays a role in the world. It’s not just “you learned X”, but “you learned X, which means Y”.

Finally, let me add a valuable lesson I learned.   I was working on some content for speaking to the media, and the SMEs (hello, Jane & Susan!) had a nice statement format that worked really well (with a memorable acronym: the SEX statement – Statement, Examples, eXplanation – I’ve never forgotten it :).   However, they realized that the opportunities to apply it might be few and far-between, so they encouraged ways to practice it.   They suggested using it with co-workers, bosses, even your kids!

The important point was the effort they put in to help you keep it active until you needed it, and that’s too often an element we forget. We can and should stream out reactivations at a rate that is appropriate for how soon and how often we’ll apply the skills, but our decision about how to support the learner’s retention should be conscious and related to their task and practice opportunities.

Note that this can and should all be done in a minimum number of words. It doesn’t take much, a sentence or two at most, unless it’s been a big elearning experience, but it is appropriate.

So, in summary, make sure you wrap up the learning experience with the same care that you began it. Make it an experience to be remembered!

Focusing on the Do: Moore’s Action Mapping

4 March 2009 by Clark 6 Comments

Cathy Moore has a lovely post with a slideshow that talks about using action mapping to design better elearning, and it’s a really nice approach.   While I don’t know from Action Mapping (tm?), I do know that the approach taken avoids the typical mistakes and focuses on the same thing I advocate: what do people need to be able to do?

The presentation rightly points out the problems with knowledge dump, and instead focuses on the business goal first, and then asks you to map out what the learner would need to be able to do to achieve that business goal.   That’s the point I was making in my ‘objectives‘ post of the Broken ID series.

Cathy nicely elaborates on that point, going directly to practice that has them doing the task, as close as possible to the real task.   Finally, she has you bring in the minimum information needed to allow them to do the task.   This is really a great ‘least assistance‘ approach!
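As a rough sketch of what such a map might look like as a structure, here’s one way to capture it: a business goal, the observable actions that serve it, and for each action a realistic practice activity plus only the minimum information needed. The goal, action, and resources below are invented for illustration, not from Cathy’s materials.

```python
# Hypothetical action map: goal -> actions -> practice + minimum information.
action_map = {
    "business_goal": "Reduce support call escalations by 20%",
    "actions": [
        {
            "do": "Diagnose the three most common billing errors",
            "practice": "Scenario: a caller disputes a charge; pick the diagnosis",
            "minimum_info": ["Billing error cheat sheet"],
        },
    ],
}
```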

Now, it doesn’t talk about examples or models (though those could fit under the minimum-information principle above), nor about introducing the topic, so I’d want to ensure that the learners are engaged in the learning experience up front, and provide a model to guide their performance on the task. What this does, however, is give you a framework and set of steps that really focuses on the important elements and avoids the typical approach that is knowledge-full and value-light. Recommended.

Workplace Learning in 10 years?

2 March 2009 by Clark 3 Comments

This month’s Learning Circuits blog Big Question is “What will workplace learning look like in 10 years?” Triggered by Jay & Harold’s post and reactions (and ignoring my two related posts on Revisiting and Learning Design), it’s asking what the training department might look like in 10 years. I certainly have my desired answer.

Ideally, in 10 years the ‘training department’ will be an ‘organizational learning’ group that looks across expertise levels and learning needs, and is responsible for equipping people not only to come up to speed, but to work optimally and collaborate to innovate. That is, it will be responsible for the full performance ecosystem.

So, there may still be ‘courses’, though they’ll be more interactive and more distributed across time, space, and context. There’ll be flexible, customized learning paths that will not only skill you up, but introduce you into the community of practice.

However, the community of practice will be responsible for collaboratively developing the content and resources, and the training department will have morphed into learning facilitators: refining the learning, information, and experience design around the community-established content, and also facilitating the learning skills of the community and its members. The learning facilitators will monitor the ongoing dialog and discussions, on the lookout for opportunities to help capture some outcomes, and watch the learners for opportunities to develop their abilities to contribute. They’ll also be looking for opportunities to introduce new tools that can augment the community’s capabilities and create new learning, communication, and collaboration channels.

Their metrics will be different: not courses or smile sheets, but value added to the community and its individuals, and impact on the ability of the community to be effective. The skill sets will be different too: understanding not just instructional but information and experience design, continually experimenting with tools to look for new augmentation possibilities, and having a good ability to identify and facilitate the process of knowledge or concept work, not just the product.

10 years from now the tools will have changed, so it may be that some of the tasks can be automated, e.g. mining the nuggets from the informal channels, but design & facilitation will still be key.   We’ll distribute the roles to the tools, leaving the important pattern matching to the facilitators.

At least, that’s what I hope.
