Learnlets

Clark Quinn’s Learnings about Learning

Where do comics/cartoons fit?

31 May 2016 by Clark Leave a Comment

I’ve regularly suggested that you want to use the right media for the task, and there are specific cognitive properties of media that help determine the answer.  One important dimension is context versus concept, and another is dynamic versus static.  But I realized I needed to extend it.

[Diagram: media properties]

To start with, concepts are relationships, such as those captured in diagrams (as this one is!). Context, by contrast, is the actual setting. For one, you want to abstract away; for the other, you want to be concrete. Similarly, some relationships and settings are static, whereas others are dynamic. Obviously, here we're talking about static relationships, but if we wanted to illustrate some chemical process, we might need an animation.

So, for contextualization, we can use a photo capturing the real setting. Unless, of course, it’s dynamic and we need a video. Similarly, if we need conceptual relationships, we use a diagram, unless again if it’s dynamic and we need an animation. (By animation, I mean a dynamic diagram, not a cartoon, just as a video is a dynamic recording of a live setting, not a cartoon.)

Language is a funny case, in that it can be static as text or dynamic as audio. The choice depends on where attention is needed: you can't (and shouldn't) put static text on a dynamic visual, and you can't use video if attention can't be visually distracted. Audio is valuable when you can't take your eyes away (e.g. the audio guidance on a GPS: "now turn left").

Note that there are halfway points. You can capture a sequence of static images in lieu of a video (think narrated slide show).  Similarly, a diagram could be shown in multiple states.  And this is all ignoring interactives.  But there’s a particular place I want to go, hinted above.

I was reflecting that comics (static) and cartoons (dynamic) are  instances that don’t naturally fall out of my characterization, and realized I needed a way to consider  them.    I posit that comics/cartoons are halfway between context and concept.  They strip away unnecessary context, so that it’s easier to see what’s important, and have the potential (via, say, thought balloons) to annotate the world with the concept.  So they’re semi-conceptual, and semi-contextual.  I’ve regularly argued that we don’t use them often enough for a number of reasons, and it’s important to think where they fit.

This is my proposal: comics help focus attention on the important elements, stripping unnecessary details while retaining the ability to elaborate (along with their other benefits: familiarity, bandwidth, etc.). So, what do you say? Does this fit and make sense? Are you going to use more comics/graphic novels/cartoons?

Heading in the right direction

26 May 2016 by Clark 2 Comments

Most of our educational approaches – K12, Higher Ed, and organizational – are fundamentally wrong.  What I see in schools, classrooms, and corporations are information presentation and knowledge testing.  Which isn’t bad in and of itself, except that it won’t lead to new abilities to  do!  And this bothers me.

As a consequence, I took a stand, trying to create a curriculum that wasn't about content, but instead about action. I elaborated it in some subsequent posts, trying to make clear that the activities could be connected and social, so that you could be developing something over time, and also that the activities produce products – both the work and thoughts on the work – that serve as a portfolio.

I was just reading and saw some lovely synergistic thoughts that give me hope. For one, Paul Tough apparently wrote a book on the non-cognitive aspects of successful learners, How Children Succeed, and then followed it up with Helping Children Succeed, which digs into the ignored 'how'. His point is that elements like 'grit' that have been (rightly) touted aren't developed in the same way cognitive skills are, and yet they can be developed. I haven't read his book (yet), but in exploring an interview with him, I found out about Expeditionary Learning.

And what Expeditionary Learning has, I’m happy to discover, is an approach based upon deeply immersive projects that integrate curricula and require the learning traits recognized as important.  Tough’s point is that the environment matters, and here are schools that are restructured to be learning environments with learning cultures.  They’re social, facilitated, with meaningful goals, and real challenges. This is about learning, not testing.  “A teacher’s primary task is to help students overcome their fears and discover they can do more than they think they can.”

And I similarly came across an article  by Benjamin Riley, who’s been pilloried as the poster-child against personalization.  And he embraces that from a particular stance, that learning should be personalized by teachers, not technology.  He goes further, talking about having teachers understand learning science, becoming learning engineers.  He also emphasizes social aspects.

Both of these approaches indicate a shift from content regurgitation to meaningful social action, in ways that reflect what’s known about how we think, work, and learn.  It’s way past time, but it doesn’t mean we shouldn’t keep striving to do better. I’ll argue that in higher ed and in organizations, we should also become more aware of learning science, and on meaningful activity.  I encourage you to read the short interview and article, and think about where you see leverage to improve learning.  I’m happy to help!

Learning in Context

4 May 2016 by Clark 1 Comment

In a recent guest post, I wrote about the importance of context in learning. And for a featured session at the upcoming FocusOn Learning event, I'll be talking about performance support in context. But there was a recent question about how you'd do it in a particular environment, and that got me thinking about the necessary requirements.

As context (ahem), there are already context-sensitive systems. I helped lead the design of one where a complex device was instrumented, and consequently there were many indicators about the current status of the device. This trend is increasing. And there are tools to build context-sensitive help systems around enterprise software, whether purchased or home-grown. And there are also context-sensitive systems that track your location on mobile and allow you to use that to trigger a variety of actions.

Now, to be clear, these are already in use for performance support, but how do we take advantage of them for learning? Moreover, can we go beyond location-specific learning? I think we can, if we rethink.

So first, we obviously can use those same systems to deliver specific learning. We can have a rich model of learning around a system – a detailed competency map – and, with a rich profile of the learner, we can know what they know and don't. Then, when they're at a point where there's a gap between their knowledge and the desired state, we can trigger some additional information. It's in context, at a 'teachable moment', so it doesn't necessarily have to be assessed.
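The gap-detection logic described here can be sketched very simply: compare the learner's profile against a competency map for the current context, and surface content only where something is missing. This is a minimal illustration; all names and data here are hypothetical, not from any real system.

```python
# Hypothetical competency map: what each context (a device state, a task)
# requires. In a real system this would come from the learning model.
COMPETENCY_MAP = {
    "calibrate-sensor": {"read-gauge", "adjust-offset"},
    "replace-filter": {"open-housing", "seat-filter"},
}

def teachable_moments(context, profile):
    """Return the competencies this learner still lacks for this context."""
    required = COMPETENCY_MAP.get(context, set())
    return required - profile

# Learner profile: what we believe this learner already knows.
learner_profile = {"read-gauge", "open-housing", "seat-filter"}

# The delivery system would push short content for each gap, in context.
for competency in sorted(teachable_moments("calibrate-sensor", learner_profile)):
    print(f"Trigger micro-lesson: {competency}")
```

The point of keeping it this small is that the hard part isn't the trigger logic; it's maintaining a competency map and learner profile rich enough to make the gap meaningful.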

This would typically be on top of performance support, as they're still learning, so we don't want to risk a mistake. Or we could offer a chance to try it out and get it wrong – without the action actually being executed – and then give them feedback and the right answer to perform. We'd have to be clear, however, about why learning is needed in addition to the right answer: is this something that really needs to be learned?

I want to go a wee bit further, though; can we build it around what the learner is doing?  How could we know?  Besides increasingly complex sensor logic, we can use  when they are.  What’s on their calendar?  If it’s tagged appropriately, we can know at least what they’re  supposed to be doing.  And we can develop not only specific system skills, but more general business skills: negotiation, running meetings, problem-solving/trouble-shooting, design, and more.

The point is that our learners are in contexts all the time.  Rather than take them away to learn, can we develop learning that wraps around what they’re doing? Increasingly we can, and in richer and richer ways. We can tap into the situational motivation to accomplish the task in the moment, and the existing parameters, to make ordinary tasks into learning opportunities. And that more ubiquitous, continuous development is more naturally matched to how we learn.

Learning in context

26 April 2016 by Clark 3 Comments

In preparation for the upcoming FocusOn Learning Conference, where I’ll be running a workshop  about cognitive science for L&D, not just for learning but also for mobile and performance support, I was thinking about how  context can be leveraged to provide more optimal learning  and performance.  Naturally, I had to diagram it, so let me talk through it, and you let me know what you think.

[Diagram: learning apart from work]

What we tend to do, as a default, is to take people away from work, provide the learning resources away from the context, then create a context to practice in. There are coaching resources, but not necessarily the performance resources. (And I'm not even mentioning the typical lack of sufficient practice.) And this makes sense when the consequences of making a mistake on the task are irreversible and costly, e.g. medicine, transportation. But that's not as often as we think. And there's an alternative.

We can wrap the learning around the context. Our individual is in the world, performing the task. There can be coaching (particularly at the start, gradually removed as the individual moves to acceptable competence). There are also performance resources – job aids, checklists, etc. – in the environment. There also can be learning resources, so the individual can continue to self-develop, particularly in the increasingly likely situation that the task has some ambiguity or novelty in it. Of course, that only works if we have a learner capable of self-learning (hint hint).

The problems with always taking people away from their jobs are multiple:

  • it is costly to interrupt their performance
  • it can be costly to create the artificial context
  • the learning is less likely to make it back to the workplace

Our brains don’t learn in an event model, they learn in little bits over time. It’s more natural,  more  effective, to dribble the learning out at the moment of need, the learnable moment.  We have the capability, now, to  be more aware of the learner, to deliver support in the moment, and develop learners over time. The way their brains actually learn.  And we should be doing this.  It’s more effective as well as more efficient.  It requires moving out of our comfort zone; we know the classroom, we know training.  However, we now also know that the effectiveness of classroom training can be very limited.

We have the ability to start making learning effective as well as efficient. Shouldn’t we do so?

Deeper Learning Reading List

20 April 2016 by Clark 3 Comments

So, for my last post I had the Revolution Reading List, and it occurred to me that I've been reading a bit about deeper learning design as well, so I thought I'd offer some pointers here too.

The starting point would be Julie Dirksen's Design For How People Learn (already in its 2nd edition). It's a very good interpretation of learning research applied to design, and very readable.

A new book that’s very good is Make It Stick, by Peter Brown, Henry Roediger III, and Mark McDaniel, the former being a writer who’s worked with two scientists to take learning research into 10 principles.

And let me mention two Ruth Clark books. One, with Dick Mayer from UCSB, e-Learning and the Science of Instruction, focuses on the use of media. A second, with Frank Nguyen and the wise John Sweller, Efficiency in Learning, focuses on cognitive load (which has many implications, including some overlap with the first).

Patti Schank has come out with a concise compilation of research called The Science of Learning that’s available to ATD members. Short and focused with her usual rigor.  If you’re not an ATD member, you can read her  blog posts that contributed (click ‘View All’).

Dorian Peters's book on Interface Design for Learning also has some good learning principles as well as interface design guidance. It's not the same for learning as for doing.

Of course, a classic is a compilation of research by a blue-ribbon team led by John Bransford, How People Learn (online or downloadable). Voluminous, but pretty much state of the art.

Another classic is  the Cognitive Apprenticeship  model of Allen Collins & John Seely Brown. A holistic model abstracted across some seminal work, and quite readable.

The Science of Learning Center has an academic integration of research to instruction theory by Ken Koedinger, et al,  The Knowledge-Learning-Instruction Framework, that’s freely available as a PDF.

I’d be remiss if I don’t point out the Serious eLearning Manifesto, which has 22 research principles underneath the 8 values that differentiate serious elearning from typical versions.  If you buy in, please sign on!

And, of course, I can point you to my own series for Learnnovators on Deeper ID.

So there you go with some good material to get you going. We need to do better at elearning, treating it with the importance it deserves.  These don’t necessarily tell you how to redevelop your learning design processes, but you know who can help you with that.  What’s on your list?

A complex look at task assignments

6 April 2016 by Clark Leave a Comment

I was thinking (one morning at 4AM, when I was wishing I was asleep) about designing assignment structures that matched my activity-based learning model.  And a model emerged that I managed to recall when I finally did get up.  I’ve been workshopping it a bit since, tuning some details. No claim that it’s there yet, by the way.

[Diagram: model assignment structure]

And I'll be the first to acknowledge that it's complex, as the diagram represents, but let me tease it apart for you and see if it makes sense. I'm trying to integrate meaningful tasks, meta-learning, and collaboration. And there are remaining issues, but let's get to the model first.

So, it starts by assigning the learners a task to create an artefact. (Spelling intended to convey that it’s not a typical artifact, but instead a created object for learning purposes.) It could be a presentation, a video, a document, or what have you.  The learner is also supposed to annotate their rationale for the resulting design as well.  And, at least initially, there’s a guide to principles for creating an artefact of this type.   There could even be a model presentation.

The instructor then reviews these outputs, and assigns the student several others to review.  Here it’s represented as 2 others, but it could be 4. The point is that the group size is the constraining factor.
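The mechanics of distributing peer reviews can be sketched as a simple rotation, assuming each learner reviews the next k learners in a fixed ordering, so no one reviews their own artefact and review load stays even. The function and names are illustrative, not any particular platform's API.

```python
def assign_reviews(learners, k=2):
    """Map each learner to k other learners whose artefacts they review.

    Uses a rotation: learner i reviews learners i+1 .. i+k (wrapping around),
    so every artefact receives exactly k reviews and no self-reviews occur.
    """
    n = len(learners)
    assert 0 < k < n, "group size is the constraining factor on k"
    return {
        learner: [learners[(i + offset) % n] for offset in range(1, k + 1)]
        for i, learner in enumerate(learners)
    }

assignments = assign_reviews(["Ana", "Ben", "Chen", "Dee"], k=2)
# e.g. Ana reviews Ben and Chen; Dee wraps around to review Ana and Ben.
```

The assertion makes the constraint from the text explicit: you can't assign more reviews per learner than there are other group members.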

And, again at least initially, there’s a rubric for evaluating the artefacts to support the learner. There could even be a video of a model evaluation. The learner writes reviews of the two artefacts, and annotates the underlying thinking that accompanies and emerges.  And the instructor reviews the reviews, and provides feedback.

Then, the learner joins with other learners to create a joint output, intended to be better than each individual submission. Initially, at least, the learners will likely be grouped with others who are similar. This step might seem counterintuitive, but while ultimately the assignments will cover widely different artefacts, initially the assignment is kept light, to allow time to come to grips with the actual process of collaborating (again with a guide, at least initially). Finally, the final artefacts are evaluated, perhaps even shared with all.

Several points to make about this. As indicated, the support is gradually faded. While another task might use a different artefact, so the guides and rubrics will change, the working-together guide can first move to higher and higher levels (e.g. starting with "everyone contributes to the plan" and ultimately getting to "look to ensure that all are being heard") before gradually being removed. And the grouping goes from alike to as widely disparate as possible. And the tasks should eventually get back to the same type of artefact, developing those 21st Century skills around different representations and ways of working. The model is designed more for a long-term learning experience than a one-off event model (which we should be avoiding anyway).

The artefacts and the notes are evidence for the instructor to look at the learner’s understanding and find a basis to understand not only their domain knowledge (and gaps), but also their understanding of the 21st Century Skills  (e.g. the artefact-creation process, and working and researching and…), and their learning-to-learn skills. Moreover, if collaborative tools are used for the co-generation of the final artefact, there are traces of the contribution of each learner to serve as further evidence.

Of course, this could continue. If it's a complex artefact (such as a product design, not just a presentation), there could be several revisions. This is just a core structure. And note that this is not for every assignment: it's a major project, around which, or in conjunction with which, other, smaller things like formative assessment of component skills and presentation of models may occur.

What emerges is that the learners are learning about the meta-cognitive aspects of artefact design, through the guides. They are also meta-learning in their reflections (which may also be scaffolded). And, of course, the overall approach is designed to get the valuable cognitive processing necessary to learning.

There are some unresolved issues here.  For one, it could appear to be heavy load on the instructor. It’s essentially impossible to auto-mark the artefacts, though the peer review could remove some of the load, requiring only oversight. For another, it’s hard to fit into a particular time-frame. So, for instance, this could take more than a week if you give a few days for each section.  Finally, there’s the issue of assessing individual understanding.

I think this represents an integration of a wide spread of desirable features in a learning experience. It’s a model to shoot for, though it’s likely that not all elements will initially be integrated. And, as yet, there’s no LMS that’s going to track the artefact creation across courses and support all aspects of this.  It’s a first draft, and I welcome feedback!

 

Activity-Based Learning

23 March 2016 by Clark 2 Comments

In a recent conversation with some Up to All of Us colleagues, I was reminded about my 'reimagining learning' model. The conversation was about fractals and learning, and how most tools (e.g. the LMS) don't reflect the conversational nature of learning. And I was thinking again about how we need to shift our thinking, and how we can reframe it.

I'd pointed one colleague to Diana Laurillard's model of conversational learning, as it does reflect a more iterative model of learning, with an ongoing cycle of action and reflection. And it occurred to me that I hadn't conveyed what the learner's experience with the activity curriculum would look like. It's implicit, but not explicit.

[Diagram: new learning cycle]

Of course, it's a series of activities (as opposed to a series of content), but it's about the product of those activities. The learner (alone or together) creates a response to a challenge, perhaps accessing relevant content as part of the process, and additionally annotates the thinking behind it.

This is then viewed by peers and/or a mentor, who provide feedback to the learner. As a nuance, there should be guidance for that feedback, so that it explicitly represents the concept(s) that should guide the performance. The subsequent activity could be to revise the product, or move along to something else.

The point being that the learner is engaged in a meaningful assignment (the activity should be contextualized), and actively reflecting. The subsequent activity, as the Laurillard model suggests, should reflect what the learner’s actions have demonstrated.

It’s very much the social cognition benefits I’ve talked about before, in creating and then getting feedback on that representation.  The learner’s creating and reflecting, and that provides a rich basis for understanding where they are at.

Again, my purpose here is to help make it clear that a curriculum properly should be about doing, not knowing. And this is why I believe that there must be people in the loop. While much of that burden might be placed on the other learners (if you have a synchronous cohort model), or even on the learner themselves, with guidance and rubrics for generating their own feedback, you still benefit from oversight in case the understanding gets off track.

We can do a lot to improve asynchronous learning, but we should not neglect social when we can take advantage of it. So, are you wanting to improve your learning?

Aligning with us

22 March 2016 by Clark Leave a Comment

One of the realizations I had in writing the Revolutionize L&D book was how badly we’re out of synch with our brains. I think alignment is a big thing, both from the Coherent Organization perspective of having our flows of information aligned, and in processes that help us move forward  in ways that reflect  our humanity.

In short, I believe we’re out of alignment with our views on how we think, work, and learn.  The old folklore that represents the thinking that still permeates L&D today is based upon outdated models. And we really have to understand these differences if we’re to get better.

[Diagram: aligning with how we think, work, and learn]

The mistaken belief about thinking is that it's all done in our head. That is, we keep the knowledge up there, and then when a context comes in we internalize it and make a logical decision and then we act. And what cognitive science says is that this isn't really the way it works. First, our thinking isn't all in our heads. We distribute it across representational tools like spreadsheets, documents, and (yes) diagrams. And we don't make logical decisions without a lot of support or expertise. Instead, we make quick decisions. This means that we should be looking at tools to support thinking, not just trying to put it all in the head. We should be putting as much in the world as we can, and look to scaffold our processes as well.

It’s also this notion that we go away and come up with the answer, and that the individual productivity is what matters.  It turns out that most innovation, problem-solving, etc, gets better results if we do it together.  As I often say “the room is smarter than the smartest person in the room  if you manage the process right“.  Yet, we don’t.  And people work better when they understand why what  they’re doing is  important and they care about it. We should be looking at ways to get people to work together more and better, but instead we still see hierarchical decision making, restrictive  cultures, and more.

And, of course, there still persists this model that information dump and knowledge test will lead to new capabilities. That's a low-probability approach. Whereas if you're serious about learning, you know it's mostly about spaced, contextualized application of that knowledge to solve problems. Instead, we see rapid elearning tools and templates that tart up quiz questions.

The point being, we aren't recognizing that which makes us special, and augmenting ourselves in ways that bring out the best. We're really running organizations that aren't designed for humans. Most of the robotic work should, and will, get automated, so we need to find ways to use people to do the things they're best at. It should be the learning folks leading this, and if they're not ready, well, they'd better figure it out or be left out! So let's get a jump on it, shall we?

Context Rules

15 March 2016 by Clark Leave a Comment

I was watching a blab  (a video chat tool) about the upcoming FocusOn Learning, a new event from the eLearning Guild. This conference combines their previous mLearnCon and Performance Support Symposium with the addition of  video.  The previous events have been great, and I’ll of course be there (offering a workshop on cognition for mobile, a mobile learning 101 session, and one on the topic of this post). Listening to folks talk about the conference led me to ponder the connection, and something struck me.

I find it kind of misleading that it's FocusOn Learning, given that performance support, mobile, and even video are typically more about acting in the moment than developing over time. Mobile device use tends to be more about quick access than extended experience. Performance support is more about augmenting our cognitive capabilities. Video (as opposed to animation, images, or graphics, and similar to photos) is about showing how things happen in situ (I note that this is my distinction, and they may well include animation in their definition of video; caveat emptor). The unifying element, to me, is context.

So, mobile is a platform.  It’s a computational medium, and as such is the same sort of computational  augment that a desktop  is.  Except that it can be with you. Moreover, it can have sensors, so not just providing computational capabilities where you are, but  because of when and where you are.

Performance support is about providing a cognitive augment. It can be any medium – paper, audio, digital – but it’s about providing support for the gaps in our mental capabilities.  Our architecture is powerful, but has limitations, and we can provide support to minimize those problems. It’s about support  in the moment, that is, in context.

And video, like photos, inherently captures context.  Unlike an animation that represents conceptual distinctions separated from the real world along one or more dimensions, a video accurately captures what the camera sees happening.  It’s again about context.

And the interesting thing to me is that we can support performance in the moment, whether with a lookup table or a how-to video, without learning necessarily happening. And that's OK! It's also possible to use context to support learning, and in fact it can take less material to augment a real context than to create the artificial context that so much of learning requires.

What excited me was that there was a discussion about AR and AI. And these, to me, are also about context.  Augmented Reality layers  information on top  of your current context.  And the way you start doing contextually relevant content delivery is with rules tied to content descriptors (content systems), and such rules are really part of an intelligently adaptive system.
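Rules tied to content descriptors can be sketched as a simple matcher: the context (assembled from sensors, calendar, location) is a set of signals, and each content item carries descriptors; an item fires when all its descriptors are satisfied. This is a toy illustration under assumed names, not any real content system's schema.

```python
# Hypothetical content library: each item is described by tags (descriptors).
CONTENT = [
    {"id": "howto-projector", "tags": {"location:room-12", "task:present"}},
    {"id": "negotiation-tips", "tags": {"task:meeting", "skill:negotiation"}},
]

def match_content(context_signals, library):
    """Return ids of content whose descriptors are all met by the context."""
    return [item["id"] for item in library
            if item["tags"] <= context_signals]  # subset test: all tags satisfied

# Context signals assembled from where/when the learner is and what's scheduled.
signals = {"location:room-12", "task:present", "time:morning"}
print(match_content(signals, CONTENT))
```

Stacking more rules on top of such descriptors (priorities, learner profile, history) is where this starts shading into the "intelligently adaptive system" mentioned above.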

So I’m inclined to think this conference is about  leveraging context in intelligent ways. Or that it can be, will be, and should be. Your mileage may vary ;).

Mindmapping

3 March 2016 by Clark 11 Comments

So, if you haven't figured it out yet, I do mindmaps. As I've recited before, I started doing it as a way to occupy my brain enough so I could listen to keynotes, but occasionally I use it for other purposes, such as representing structure or even planning. And thru my esteemed colleague Jane Hart (whose Modern Workplace Learning book I'm going through and am thoroughly impressed by), I'm giving a mindmapping webinar today for a group of several universities in Ireland. I thought I'd share what I'm presenting.

[Mindmap: mindmapping]

Mindmaps are a visual way of representing knowledge. You use links to show connections between concepts (represented as nodes), developing a structural relationship. A true semantic network would have those links labeled, as there are many different types of relationships (causal, precedence, hierarchical), but mindmaps typically have unlabeled links. Still, mindmaps capture structural information in a visual way that supports tapping into our powerful visual processing system. (This is the one I created for them to advertise the talk; it's neither the order I ended up using for them nor the order I'm using here. ;)

You can add information to them; as a visual tool, you can add extra graphical information, like tables or charts, to augment the map.  You can similarly add color as a way to layer additional semantic information such as similarity. And the links can be plain or directional.  Importantly, while a mindmap can be essentially equivalent to an outline  if you maintain a strict tree structure, you can create a graph by having more complex links that generate loops.
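The outline-versus-graph distinction above can be made concrete: store a mindmap as (parent, child) links, and it stays equivalent to an outline only while every node has a single parent; one cross-link breaks the strict tree. A small sketch, with illustrative node names:

```python
def is_strict_tree(links):
    """True if no node has more than one incoming link, i.e. the map
    maintains a strict tree structure and is outline-equivalent."""
    children = [child for _, child in links]
    return len(children) == len(set(children))

# A strict tree: every child node appears exactly once.
outline = [("media", "concept"), ("media", "context"),
           ("concept", "diagram"), ("context", "photo")]

# Adding a cross-link gives "photo" a second parent: now a graph, not a tree.
mindmap = outline + [("concept", "photo")]

print(is_strict_tree(outline))  # True
print(is_strict_tree(mindmap))  # False
```

This is the formal version of the point in the text: complex links that create multiple paths to a node are exactly what an outline can't express.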

The process of mindmapping is fairly straightforward: you have a central node, and then generate additional nodes and link them. I tend to go counter-clockwise, and include an arrow indicating that, because I’m capturing a linear presentation, but generating a static representation of information doesn’t have any directional requirement. I find that I have to frequently rearrange to fit the mindmap appropriately to the image, but that’s part of the benefit.

The evidence appears to show that mindmapping is superior to note-taking. I don't do it all the time, but there are reasons to think you should. The reason, I believe, that it is better is that you're not just transcribing a presentation; you're actively parsing it to represent the structure. If you do take notes, you should be paraphrasing what you hear in your own words, to have active processing of the information. The additional effort to extract the structure as well is a form of valuable cognitive processing that elaborates the information. Doing both, paraphrasing and extracting structure, would be a great way to really comprehend what you're hearing.

As suggested, it’s helpful to mindmap talks, but it can also be a thinking tool, to analyze situations and sort out your thoughts or plan activities and add elements as you think of them. No real advantage over an outline, potentially (though the ability to add other graphics and to make non-strict maps may counter that), though I suspect some find the drawing and rearranging to be a nice physical overhead to facilitate reflecting.  And, of course, it can be an evaluation tool, asking someone to create their maps to see their understanding.

While there are dedicated tools for mindmapping, both applications and in the cloud, which will make creating and rearranging easier (I presume), you can use almost any drawing package (I use OmniGraffle). You could use Powerpoint or Keynote, and even pencil and paper (if it’s just for the processing) though it can be harder to revise.

So, that’s my riff on mind mapping.  I welcome your thoughts.
