Learnlets


Clark Quinn’s Learnings about Learning

Writing and the 4C’s of Mobile

8 February 2010 by Clark 1 Comment

As I’ve mentioned before, I’m writing a book on mobile learning.   My only previous experience was writing Engaging Learning, where the prose practically exploded from my fingers. This time is different.

The prose actually does flow quite easily from my fingers,   but I find myself restructuring more often than last time.   This is a bigger topic, and I keep uncovering new ways to think about mobile and new facets to try to include.   As a consequence, as the deadline nears (!), I find myself more and more compelled to put all free time into the text.

There’s a consequence, and that is a decreasing frequency of blogging.   I’m coming up with some great ideas, but I’ve got to get them into the book, and I’m not finding time to rewrite them.

When I do have ideas in other areas (and I always do), I’m finding that they disappear under the pressure to meet my deadline. And there are ancillary details still to be taken care of (photos of devices, coordinating a few case studies).

Further, as neither blogging nor the book (directly) pays the bills, I’ve still got to meet my client needs. Also, I’m speaking at the Learning Solutions conference and involved in various ways with several others, and some deliverables are due soon. I’m feeling a tad stretched!

So, in many ways, this is an apology for the lack of blog posts, and the fact that posting will likely be sparse for another month and some.

As a brief recompense, I did want to communicate one framework that I’m finding helpful. I’ll confess that it’s very similar to Low and O’Connell’s 4 R’s (for which I can’t find a link!?!; from my notes: Record, Recall, Reinterpret, Relate), but I can never remember them, which means they need a new alliteration. Mine’s a bit simpler:

  • Content: the provision of media (e.g. documents, audio, video, etc) to the learner/performer
  • Compute: taking in data from the learner and processing it
  • Communicate: connecting learners/performers with others
  • Capture: taking in data from sensors including camera, GPS, etc, and saving for sharing or reflection
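To make the four C’s a bit more concrete, here’s a minimal sketch in Python (purely my own illustration; the activity names and the idea of “auditing” a design against the categories are hypothetical, not part of the framework itself) of checking which capabilities a set of mobile activities actually exercises:

```python
# Hypothetical sketch: auditing a set of mobile learning activities against the 4 C's.
# The category names come from the framework above; the example activities are invented.

CATEGORIES = {"content", "compute", "communicate", "capture"}

activities = {
    "podcast on the new product line": {"content"},
    "spaced quiz pushed to the phone": {"compute"},
    "text an expert from the field": {"communicate"},
    "photograph the job site for later reflection": {"capture"},
}

def coverage(acts):
    """Return which of the 4 C's a design touches, and which it misses."""
    used = set().union(*acts.values())
    return used, CATEGORIES - used

used, missing = coverage(activities)
print("Covered:", sorted(used))    # all four, in this invented example
print("Missing:", sorted(missing))
```

The point of an audit like this is simply to prompt that ‘thinking different’: if a mobile design only ever lands in the content bucket, the other three capabilities are going unused.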

I find this one of several frameworks that support ‘thinking different’ about mobile capabilities.   I’ll be interested to hear your thoughts.

Is it all problem-solving?

12 January 2010 by Clark 5 Comments

I’ve been arguing for a while that we need to take a broader picture of learning, that the responsibility of learning units in the organization should be ensuring adequate infrastructure, skills, and culture for innovation, creativity, design, research, collaboration, etc, not just formal learning. As I look at those different components, however, I wonder if there’s an overarching, integrating viewpoint.

When people go looking for information, or colleagues, they have a problem to solve. It may be a known one with an effective solution, or it may be new. It doesn’t matter whether it’s a new service to create, a new product to design, a customer service problem, an existing bug, or what. It’s all really a situation where we need an answer and we don’t have one.

We’ll have some constraints, some information, but we’re going to have to research, hypothesize, experiment, etc. If it’s rote, we ought to have it automated, or we ought to have the solution in a performance support manner. Yes, there are times training is part of the solution. But this very much means that first, all our formal solutions (courses, job aids, etc) should be organized around problem-solving (which is another way of saying that we need the objectives to be organized around doing).

Once we go beyond that, it seems to me that there’s a plausible case to be made that all our informal learning also needs to be organized from a problem-solving perspective. What does that mean?

One of the things I know about problem-solving is that our thought processes are susceptible to certain traps that are an outcome of our cognitive architecture. Functional fixedness and set-effects are just two of the traps. Various techniques have evolved to overcome these, including problem re-representation, systematicity around brain-storming, support for thinking laterally, and more.

Should we be baking this into the infrastructure? We can’t neglect skills. Assuming that individuals are effective problem-solvers is a mistake. The benefits of instruction in problem-solving skills have been demonstrated. Are we teaching folks how to find and use data, how to design useful experiments and test solutions? Do folks know what sort of resources would be useful? Do they know how to ask for help, manage a problem-solving process, and deal with organizational issues as well as conceptual ones?

Finally, if you don’t have a culture that supports problem-solving, it’s unlikely to happen. Without an environment that tolerates experimentation (and associated failure), that supports sharing and reflection, and that rewards diverse participation and individual initiative, you’re not going to get the type of proactive solutions you want.

This is still embryonic, but I’m inclined to believe that there are some benefits from pushing this approach a bit. What say you?

Creating meaningful experiences

8 December 2009 by Clark 7 Comments

What if the learner’s experience were ‘hard fun’: challenging but engaging, yielding a desirable experience rather than just an event to be tolerated? Or, put another way: what is learning experience design?

Can you imagine creating a ‘course’ that wins raving fans?   It’s about designing learning that is not only effective but seriously engaging.   I believe that this is not only doable, but doable under real world constraints.

Let me start with this bit of the wikipedia definition of experience design:

the practice of designing…with a focus placed on the quality of the user experience…, with less emphasis placed on increasing and improving functionality

That is, experience design is about creating a user experience, not just focusing on the user’s goals, but thinking about the process as well. And that, to me, is what’s largely ignored in creating elearning: thinking about the process from the learner’s perspective. There are really two components: what we need to accomplish, and what we’d like the learner to experience.

Our first goal still has to look at the learning need, and identify an objective that we’d like learners to meet, but even that we need to rethink. We may have constraints on delivery environment, resources, and more that we have to address as well, but that’s not the barrier. The barrier is the mistake of focusing on knowledge-level objectives, not on meaningful skill change. Let me be very clear: one of the real components of creating a learning experience is ensuring that we develop, and communicate, a learning objective that the learner will ‘get’ as important and meaningful to them. And we have to take on the responsibility for making that happen.

Then, we need to design an experience that accomplishes that goal, but in a way that yields a worthwhile experience.   I’ve talked before about the emotional trajectory we might want the learner to go through.   It should start with a (potentially wry) recognition that this is needed, some initial anxiety but a cautious optimism, etc.   We want the learner to gradually develop confidence in their ability, and even some excitement about the experience and the outcome.   We’d like them to leave with no anxiety about the learning, and a sense of accomplishment.   There are a lot of components I’ve talked about along the way, but at core it’s about addressing motivation, expectations, and concerns.

Actually, we might even shoot for more: a transformative experience, where the learner leaves with an awareness of a fundamental shift in their understanding of the world, with new perspectives and attitudes to accompany their changed vocabulary and capabilities.   People look for those in many ways in their life; we should deliver.

This does not come from applying traditional instructional design to an interview with a SME (or even a Subject Matter Network, a term I’m increasingly hearing and inclined to agree with). As I defined it before, learning design is the intersection of learning, information, and experience design. It takes a broad awareness of how we learn, incorporating behavioral, cognitive, constructive, and connective viewpoints, and more. It takes an awareness of how we experience: the effects of media on cognition and emotion, and the dramatic arts. And most of all, it takes creativity and vision.

However, that does not mean it can’t be developed reliably and repeatably, on a pragmatic basis. It just means you have to approach it anew. It takes expertise, a team with the requisite complementary skill sets, and organizational support. And commitment. What will work will depend on the context and goals (best principles, not best practices), but I will suggest that it requires good content development processes, a sound design approach, and a will to achieve more than the ordinary. This is doable on a scalable basis, but we have to be willing to take the necessary steps. Are you ready to take your learning to the next level, and create experiences?

The Augmented Performer

2 December 2009 by Clark 4 Comments

The post I did yesterday on Distributed Cognition also triggered another thought, about the augmented learner.   The cited post talked about how design doesn’t recognize the augmented performer, and this is a point I’ve made elsewhere, but I wanted to capture it in a richer representation.   Naturally, I made a diagram:

If we look at our human capabilities, we’re very good pattern matchers, but pretty bad at exercising rote performance. So we can identify problems, and strategize about solutions, but when it comes to executing rote tasks, like calculation, we’re slow and error-prone. From the point of view of a problem we’re trying to solve, we’re not as effective as we could be.

However, when we augment our intellect, say with a networked device (read: mobile), we’re augmenting our problem-solving and executive capability with some really powerful calculation capability, some sensors we’re typically not equipped with (e.g. GPS, compass), and access to a ridiculously huge amount of potential information through the internet, as well as to our colleagues. From the point of view of the problem, we’re suddenly a much more awesome opponent.

And that is the real power of technology: wherever and whenever we are, and whatever we’re trying to do, there’s an app for that.   Or could be.   Are you empowering your performers to be awesome problem-solvers?

Distributed Thinking & Learning

1 December 2009 by Clark 2 Comments

A post I was pointed to reviews a chapter on distributed thinking, a topic I like from my days getting to work with Ed Hutchins and his work on Distributed Cognition. It’s a topic I spoke about at DevLearn, and recently wrote about. The chapter is by David Perkins, one of the premier thinkers on thinking, and I like several things he says.

For one, he says: “typical psychological and educational practices treat the person in a way that is much closer to person-solo”. I think that’s spot-on: we don’t tend to train for, and design for, the augmented human, and yet we know from situated cognition and distributed cognition that much of the problem-solving we do is augmented in many ways, from pencil and paper, to calculators, references, and mobile devices.

I also like his separation of task solving from executive function, where executive function is the searching, sequencing, etc. of the underlying domain-specific tasks, and how he notes that just because you create an environment that requires executive functioning, it doesn’t mean the learner will be able to develop those skills. “In general, cognitive opportunities are not in themselves cognitive scaffolds.” So treat all those so-called ‘edutainment’ games that claim to develop problem-solving skills with great care; they may require those skills, but there’s little evidence I’ve seen that they actually develop them.

The implication is that if we have kids solve problems with executive support, but without scaffolding that executive support and gradually releasing those executive skills to the learner, we’re not really developing appropriate problem-solving skills. We don’t talk explicitly about them, and consequently leave the acquisition of those skills to chance. If we don’t put 21st century skills into our courses, K12, higher ed, and organizational, we’re not really developing our performers.

And that, at the end of the day, is what we need to be doing. So, start thinking a bit broader, and deeper, about learning and the components thereof, and produce better learning, better learners, and ultimately better performance.

Who authorizes the authority?

28 November 2009 by Clark 2 Comments

As a reaction to my eLearnMag editorial on the changing nature of the educational publishing market, Publish or Perish, a colleague said: “There is a tremendous opportunity in the higher ed publishing market for a company that understands what it means to design and deliver engaging, valuable, and authentic customer experiences–from content to services to customer service and training.”

I agree, but it triggered a further thought. When we go beyond delivering content as a component of a learning experience, and start delivering learning experiences, are we moving from publisher to education provider?   And if so, what are the certification processes?

Currently, institutions are accredited by accrediting bodies. Different bodies accredit different things. There are specialized accrediting bodies (e.g. AACSB or ACBSP for business, ABET for applied science). In some cases, there are just regional accreditation bodies (e.g. WASC). There’s overlap, in that a computer science school might want to align with ABET, and yet the institution has to be accredited by, say, WASC.

And I think this is good, in that having groups working to oversee specific domains can be responsive to changing demands, while general accreditation oversees ongoing process. I recall that in the past, this latter was largely about ensuring that there were regular reviews and specific improvement processes, almost an ISO 9001 approach. However, are they really able to keep up? Are they in touch with new directions? The recent scandals around business school curricula seem to indicate some flaws.

On the other hand, who needs accreditation? We still have corporate universities, which don’t seem to need to be accredited except by their organization, though sometimes they partner with institutions to deliver accredited programs. And many people provide coaching services and workshops. There are even certificates for workshops, which presumably depend on the quality of the presenter, and sometimes on some rigor around the process to ensure that there’s feedback going on so that continuing education credits can be earned.

My point is, the standards vary considerably, but when do you cross the line? Presumably, you can’t claim outcomes that aren’t legitimate (“we’ll raise your IQ 30 points” or somesuch), but otherwise, you can sell whatever the market will bear.   And you can arrange to be vetted by an independent body, but that’s problematic from a cost and scale perspective.

Several issues arise from this for me.   Say you wanted to develop some content (e.g. deeper instructional design, if you’re concerned like me about the lack of quality in elearning).   You could just put it out there, and make it available for free, if you’ve the resources.   Otherwise, you could try to attach a pricetag, and see if anyone would pay.   However, what if you really felt it was a definitive suite of content, the equivalent of a Master’s course in Instructional Technology?   You could sell it, but you couldn’t award a degree even if you had the background and expertise to make a strong claim that it’s a more rigorous degree than some of those offered by accredited institutions, and more worthwhile.

The broader question, to me, is what is the ongoing role of accreditation?   I’ve argued that the role of universities, going forward, will likely be to develop learning to learn skills. So, post your higher ed experience (which really should be accomplished K12, but that’s another rant), you should be capable of developing your own skills.   If you’ve developed your own learning abilities, and believe you’ve mastered an area, I guess you really only need to satisfy your current or prospective employer.

On the other hand, an external validation certainly makes it easier to evaluate someone than the time-intensive process of evaluating them yourself. Maybe there’s a market for much more focused evaluations, and associated content?

So, will we see a broader diversity of acceptable evaluations, more evaluation of the authorial voice of any particular learning experience, a lifting of the game by educational institutions, or a growing market of diverse accreditation (“get credit for your life experience” from the Fly By Night School of Chicanery)?

Who are mindmaps for?

13 November 2009 by Clark 9 Comments

In response to my recent mindmap of Andrew McAfee’s conference keynote (one of a number of mindmaps I’ve done), I got this comment:

Does the diagram work as a useful way of encapsulating the talk for someone who was there? Because, speaking as someone who wasn’t, I find it almost entirely content-free. Just kind of a collection of buzz-phrases in thought bubbles, more or less randomly connected.

I’m not trying to criticise his talk – which obviously I didn’t hear – or his points – which I still have no idea about – but the diagram as a method of conveying information is a total failure to this sample size of one. Possibly more useful as a refresher mechanism for people who got the talk in its original form?

Do mindmaps work for readers?   Well, I have to admit one reason I mindmap is completely personal.   I do it to help me process the presentation. Depending on the speaker, I can thoughtfully reprocess the information, or sometimes just take down interesting comments, but there are several benefits: In figuring out the ways to link, I’m capturing the conceptual structure of the talk (really, they’re concept maps), and I’m also occupying my personal bandwidth enough to allow me to focus on the talk without my mind chasing down one path and missing something.   Er, mostly…

Then, for a second category, those who actually heard the talk, the maps might be worthwhile for reflection and re-processing. I’d welcome anyone weighing in on that; I don’t have access to someone else’s example to see whether it would work for me.

Then, there are the potential viewers, like the commenter, for whom it’s either possible or not to process any coherent idea out of the presentation.   I looked back at the diagram for McAfee’s keynote, and I can see that I was cryptic, missing some keywords in communicating. This was for two reasons: one, he was quick, and it was hard to get it down before he’d moved on.   Two, he was eloquent, and because he was quick I couldn’t find time to paraphrase.   And there’s a more pragmatic reason; I try to constrain the size of the mindmap, and I’m always rearranging to get it to fit on one page.   That effort may keep me more terse than is optimal for unsupported processing.

I will take issue with “more or less randomly connected”, however. The connections are quite specific. In all the talks I’ve done this for, there have been several core points that are elaborated in various ways, and each talk tends to be composed of a replicated structure. The connections capture that structure. For instance, McAfee repeatedly took a theme, used an example to highlight it, then derived a take-home point and some corollaries. There would be ways to convey that structure more eloquently (e.g. labeled links, color coding), but the structure isn’t always laid out beforehand (it’s emergent), and things move fast enough that I couldn’t do it on the fly.

I could post-process more, but in the most recent two cases I wanted to get it up quickly: when I tweeted I was making the mindmap, others said they were eager to see it, so I hung on for some minutes after the keynotes to get it up quickly.   McAfee himself tweeted “dang, that was FAST – nice work!”

I did put the arrow in the background to guide the order in which the discussion came, as well, but apparently it is too telegraphic for the non-attendee. It happens I know the commenter well, and he’s a very smart guy, so if he’s having trouble, that’s definitely an argument that the raw mindmap alone is not communicative, at least not without perhaps some post-processing to make the points clear.

Really valuable to get the feedback, and worthwhile to reflect on what the tradeoffs are and who benefits. It may be that these are only valuable for fellow attendees.   Or just me. I may have to consider a) not posting, b) slowing down and doing more post-processing, or…?   Comments welcome!

Game-based meta-cognitive coaching

15 October 2009 by Clark 1 Comment

Many years ago, I read of some work being done by Valerie Shute and Jeffrey Bonar that I later got a chance to actually play a (very small) role in (and even later got to work with Valerie, definitely world-class talent).   They had developed three separate tutoring environments (geometric optics, economics, electrical circuits), yet the tutoring engine was essentially the same across all three, not domain specific.   The clever thing they were doing was tutoring on exploration skills, varying one variable at a time, making reasonable increments in values to graph trends, etc.

Subsequent to that, I got involved again in games for learning. What naturally occurred to me was that you could put the same sort of meta-cognitive skill tutoring in a game environment, as you have to digitally create all the elements you’d need to track anyway for game reasons, and the tutoring could be a layer on top. While this would work in a single game (and we did put a small version into the Quest game), it would be even better on top of a game engine. I even proposed it as a research project, but the grant reviewers thought that, while a good idea, it was too ambitious (ahead of my time and underestimated :).

The renewed interest in so-called 21st century skills, the kind Stephen Downes so eloquently calls an Operating System for the Mind, reawakens the opportunity. These skills are manifested in activity, and inferring approaches and providing feedback requires an understanding of that activity. In a well-defined arena like a designed game environment, we can know the goals and possible actions, and start looking for patterns of behavior.

Game engines, with their fixed primitives, make it easier to define what the goals are, and consequently to specify particular goals and to describe the patterns we’re looking for in a more general way. Thus, in a game, we can see whether the learners’ exploration is systematic, whether their attempts are as informative as possible, and possibly more.
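To make “looking for patterns” slightly more concrete, here’s a minimal sketch in Python (my own illustration, not Shute and Bonar’s engine; the trial log format is invented) of one such pattern check: whether a learner’s successive experiments vary only one variable at a time:

```python
# Hypothetical sketch: detecting whether successive experiments vary one variable
# at a time. A real game engine would supply its own log of learner actions;
# this trial format is invented for illustration.

def changed_vars(prev, curr):
    """Variables whose settings differ between two consecutive experiments."""
    return [k for k in curr if curr[k] != prev.get(k)]

def is_systematic(trials):
    """True if every step from one trial to the next changes exactly one variable."""
    return all(len(changed_vars(a, b)) == 1 for a, b in zip(trials, trials[1:]))

# Invented example: exploring a geometric-optics simulation.
trials = [
    {"focal_length": 10, "object_distance": 30},
    {"focal_length": 10, "object_distance": 40},  # one change: informative
    {"focal_length": 20, "object_distance": 50},  # two changes: confounded
]

print(is_systematic(trials))  # False -> a cue for meta-cognitive coaching
```

A detector like this is only an assessment of the exploration behavior; the interesting part is the intervention it can trigger, as discussed below.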

This is also true of virtual worlds, although only when designed with goals (e.g. from a simulation to a scenario, whether tuned into a game or not).   The benefit of a virtual world is, again, the primitives are fixed, simplifying the task of defining goals and actions.

Of course, building particular types of interaction (e.g. social), particular types of clues (e.g. audio versus visual) and looking for patterns can provide deeper opportunities.   Really, such performance is initially an assessment (one of the facets of what we were doing on the Intellectricity project was building a learner characteristic assessment as a game), and that assessment can trigger intervention as a consequence.   For any malleable skill, we have real opportunities.

Given that much of what is necessary are abilities to research, evaluate the quality of sources, design, experiment, create, and more, these environments are a fascinating opportunity. I’m not in a situation to lead such an initiative, but I still think it’s a worthwhile undertaking. Anyone ‘game’?

What’s an ‘A’ for?

12 September 2009 by Clark 2 Comments

I was recently thinking about grades, and wondering what an ‘A’ means these days. Then, at my lad’s Back to School night, I was confronted with evidence of the two competing theories that I see. One teacher had a scale on the wall, with (I don’t remember exactly, but something like): 96-100 = A+, 91-95 = A, 86-90 = A-, and so on, down to below 50 = Fail. Now, you get full points on homework just for trying, but it’s clearly a competency model, with absolute standards. A different teacher recounted how she tells students that if they just do the required work, it’s only worth a C, and A’s are for above and beyond. That’s a different model. There aren’t strict criteria for the latter. And I’m very sympathetic to that latter stance, even though it seems subjective.

The second approach resonates with my experience back in high school, where A’s were handed out for work that really was above and beyond the ordinary. A deeper understanding. We seem to have shifted to a model where if you do what’s asked, you get an ‘A’. And I see benefits on both sides. Defining performance standards, and having everyone able to achieve them, is ideal. Yet, intuitively, you recognize that there’s the ability to apply concepts, and then another level where people can flexibly use them to solve novel problems, combine them with other concepts, infer new concepts, etc.

I was pointed to some work by Daniel Schwartz (thanks @mrch0mp3rs) that grounds this intuition in an innovative framework based upon some good research. In a paper with John Bransford and David Sears, they made an intriguing case for two different forms of transfer: efficient and innovative, and argued convincingly that most of our models address the former and not the latter, yet addressing the latter yielded better outcomes on both. I think the work David Jonassen is doing on teaching problem-solving is developing just this sort of understanding, but on problems that are like real-world ones (and yet it improves performance on standard measures). And I like David’s lament that the problems kids solve in schools have no relation to the problems they see in the world (and, implicitly, no worth).

Right now, our competencies aren’t defined well enough to support assessing this extra level. In an ideal world, we’d have them all mapped out, and you could get A’s in every one you could master. We don’t live in that world, unfortunately. So we have two paths. Either we live with our lower measures, everyone gets A’s for meeting them (if they try, but that’s a separate issue), and we sort it out after graduation, in the real world; or we allow some interpretation by the teacher and measure not only effort, but a deeper form of understanding. We’ve steered away from the second approach, probably because of the consequent arguments about favoritism, social stigma, etc. Yet the former is increasingly meaningless, I fear.

We should bite the bullet and admit that we’re waving our hands. Then we could own up to the fact that not all teachers are ready to do the type of teaching David’s doing and Daniel’s advocating, and look to using technology to make available a higher quality of content (like the UC College Prep program has been doing) to provide support. I’d rather see a man-on-the-moon program around getting a really meaningful curriculum up online than going to Mars at this point, and I’m a big fan of NASA and the pragmatic benefits of space exploration. I just think such a project would have a bigger impact on the world, all told.

In the meantime, we have to live with some grade inflation (gee, I got into a UC with a high school average below 4.0!), bad alignment between what schools do and what kids need as preparation for life in this century, and a very long road towards any meaningful change.   Sigh.

Driving formal & informal from the same place

8 September 2009 by Clark 4 Comments

There’s been such a division between formal and informal: the fight for resources, for mindspace, and for people’s ability to get their minds around making informal learning concrete. However, I’ve been preparing a presentation that takes another way of looking at it, and I want to suggest that, at core, both are being driven from the same point: how humans learn.

I was looking at the history of society, and it’s getting more and more complex. Organizationally, we moved from the village to the city, and started getting hierarchical. Businesses are now retreating from that point of view, and trying to get flatter and more networked.

Organizational learning, however, seems to have done almost the opposite. It has gone from networks of apprenticeship through most of history, through the dialectical approach of the Greeks that started imposing a hierarchy, to classrooms which really treat each person as an independent node: the same, autonomous, with no connections.

Certainly, we’re trying to improve our pedagogy (to more of an andragogy), by looking at how people really learn.   In natural settings, we learn by being engaged in meaningful tasks, where there’re resources to assist us, and others to help us learn. We’re developed in communities of practice, with our learning distributed across time and across resources.

That’s what we’re trying to support through informal approaches to learning. We’re going beyond just making people ready for what we can anticipate, and supporting them in working together to go beyond what’s known, and be able to problem-solve, to innovate, to create new products, services, and solutions.   We provide resources, and communication channels, and meaning representation tools.

And that’s what we should be shooting for in our formal learning, too. Not an artificial event, but meaningful activity that learners get as important, with resources to support them and, ideally, collaboration to help disambiguate and co-create understanding. The task may be artificial, and the resources structured for success, but there’s much less of a gap between what learners do for learning and what they do in practice.

In both cases, the learning is facilitated. Don’t assume self-learning skills, but support both task-oriented behaviors and the development of self-monitoring and self-learning.

The goal is to remove the artificial divide between formal and informal, and recognize the continuum of developing skills from foundational abilities into new areas, developing learners from novices to experts both in their domains and in learning itself.

This is the perspective that drives the vision of moving the learning organization role from ‘training’ to learning facilitator. Across all organizational knowledge activities, you may still design and develop, but you nurture as much, or more.   So, nurture your understanding, and your learners.   The outcome should be better learning for all.
