Learnlets


Clark Quinn’s Learnings about Learning

Reimagining Learning

8 March 2012 by Clark 20 Comments

On the way to the recent Up To All Of Us unconference  (#utaou), I hadn’t planned a personal agenda.  However, I was going through the diagrams that I’d created on my iPad, and discovered one that I’d frankly forgotten. Which was nice, because it allowed me to review it with fresh eyes, and it resonated.  And I decided to put it out at the event to get feedback.  Let me talk you through it, because I welcome your feedback too.

Up front, let me state at least part of the motivation.  I’m trying to capture rethinking about education or formal learning. I’m tired of anything that allows folks to think knowledge dump and test is going to lead to meaningful change.  I’m also trying to ‘think out loud’ for myself.   And start getting more concrete about learning experience design.

Let me start with the second row from the top.  I want to start thinking about a learning experience as a series of activities, not a progression of content.  These can be a rich suite of things: engagement with a simulation, a group project, a museum visit, an interview, anything you might choose for an individual to engage in to further their learning. And, yes, it can  include traditional things: e.g. read this chapter.

This, by the way, has a direct relation to Project Tin Can, a proposal to supersede SCORM, allowing a greater variety of activities: Actor – Verb – Object, or I – did – this.  (For all I can recall, the origin of the diagram may have been an attempt to place Tin Can in a broad context!)
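
To make the Actor – Verb – Object idea concrete, here's a rough sketch of what such a statement might look like, expressed as a simple Python structure. The shape follows Tin Can-style statements as I understand them, but treat the field names and the verb URI as illustrative rather than the spec.

```python
# A minimal, illustrative Actor - Verb - Object statement ("I did this"),
# loosely in the style of Tin Can (later xAPI); field names are for illustration.
statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/museum-visit-reflection",
        "definition": {"name": {"en-US": "Museum visit reflection"}},
    },
}
# Reads as: Pat Learner -- completed -- Museum visit reflection
```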

Around these activities, there are a couple of things. For one, content is accessed on the basis of the activities, not the other way around. Also, the activities produce products, and also reflections.

For the activities to be maximally valuable, they should produce output.  A sim use could produce a track of the learner’s exploration. A group project could provide a documented solution, or a concept-expression video or performance. An interview could produce an audio recording.  These products are portfolio items, going forward, and assessable items.  The assessment could be self, peer, or mentor.

However, in the context of ‘make your thinking visible’ (aka ‘show your work’), there should also be reflections, or cognitive annotations.  The underlying thinking needs to be visible for inspection. This is also part of your portfolio, and assessable. This, moreover, is where the opportunity lies to really recognize whether or not the learner is getting the content, and to detect opportunities for assistance.

The learner is driven to content resources (audios, videos, documents, etc.) by meaningful activity.  This is in opposition to the notion that a content dump happens before meaningful action. However, prior activities can ensure that learners are prepared to engage in the new activities.

The content could be pre-chosen, or the learners could be scaffolded in choosing appropriate materials. The latter is an opportunity for meta-learning.  Similarly, the choice of product could be determined, or left up to learner/group choice, again an opportunity for learning cross-project skills.  Helping learners create useful reflections is valuable (I recall guiding honours students to take credit for the work they’d done; they were blind to much of their own hard work!).

When I presented this to the groups, there were several questions asked via post-its on the picture I hand-drew. Let me address them here:

What scale are you thinking about?

This unpacks in several ways. What goes into activity design is a whole separate area, and learning experience design may well play a role beneath this level.  However, the granularity of the activities is at issue: I think about this at several scales, from an individual lesson plan to a full curriculum.  The choice of evaluation should be competency-based, assessed by rubrics, even jointly designed ones.  There is a lot of depth linked to this.
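
To make the rubric idea slightly more concrete, here's a minimal, purely hypothetical sketch of a competency rubric as data, with a trivial scoring function. Real rubrics (especially jointly designed ones) would be richer, and the judgement far more nuanced than an average.

```python
# A purely hypothetical competency rubric: each criterion has descriptors
# keyed by level, and a product is assessed by assigning a level per
# criterion (by self, peer, or mentor).
rubric = {
    "analysis": {1: "restates facts", 2: "identifies key factors", 3: "weighs trade-offs"},
    "communication": {1: "hard to follow", 2: "clear structure", 3: "compelling narrative"},
    "reflection": {1: "describes steps", 2: "explains choices", 3: "critiques own approach"},
}

def assess(levels: dict) -> float:
    """Average the assigned levels across criteria (a crude stand-in for judgement)."""
    return sum(levels.values()) / len(levels)

# e.g. a mentor's assessment of one portfolio item
print(assess({"analysis": 3, "communication": 2, "reflection": 2}))  # ~2.33
```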

How does this differ from a traditional performance-based learning model?

I hadn’t heard of performance-based learning. Looking it up, there seems to be considerable overlap, as there is with outcome-based learning, problem-based learning, service learning, and similarly Understanding By Design.  It may not be more; I haven’t yet done the side-by-side. It’s scaling those up, and arguably a different lens, and maybe more, or not.  Still, I’m trying to carry it to more places, and help provide ways to think anew about instruction and formal education.

An interesting aside, for me, is that this does  segue to informal learning. That is, you, as an adult, choose certain activities to continue to develop your ability in certain areas.  Taking this framework provides a reference for learners to take control of their own learning, and develop their ability to be better learners.  Or so I would think, if done right.  Imagine the right side of the diagram moving from mentor to learner control.

How much is algorithmic?

That really depends.  Let me answer that in conjunction with this other comment:

Make a convert of this type of process out of a non-tech traditional process and tell that story…  

I can’t do that now, but one of the attendees suggested this sounded a lot like what she did in traditional design education. The point is that this framework is independent of technology.  You could be assigning studio and classroom and community projects, and getting back write-ups, performances, and more.  No digital tech involved.

There are definite ways in which technology can assist: providing tools for content search, and product and reflection generation, but this is not  about technology. You could be algorithmic in choosing from a suite of activities by a set of rules governing recommendations based upon learner performance, content available, etc.  You could also be algorithmic in programming some feedback around tech-traversal.  But that’s definitely not where I’m going right now.
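
Just to illustrate (the activities and rules below are invented, not a recommendation engine I'm proposing), algorithmic selection could be as simple as filtering by prerequisites met and then targeting the learner's weakest competency:

```python
# A purely illustrative, rule-based recommender: filter activities by
# prerequisites met, then pick the one targeting the weakest competency.
activities = [
    {"name": "sim: triage case", "competency": "diagnosis", "prereqs": {"intro reading"}},
    {"name": "group project", "competency": "collaboration", "prereqs": set()},
    {"name": "expert interview", "competency": "questioning", "prereqs": {"intro reading"}},
]

def recommend(completed, competency_levels):
    eligible = [a for a in activities if a["prereqs"] <= completed]
    if not eligible:
        return None
    # Weakest competency first; anything never assessed defaults to 0.
    return min(eligible, key=lambda a: competency_levels.get(a["competency"], 0.0))

print(recommend({"intro reading"},
                {"diagnosis": 0.4, "questioning": 0.2, "collaboration": 0.7}))
# -> the expert interview, since questioning is the weakest eligible competency
```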

Similarly, I’m going to answer two other questions together:

 How can I look at the path others take? and How can I see how I am doing?

The portfolio is really the answer.  You should be getting feedback on your products, and seeing others’ feedback (within limits).  This is definitely not intended to be individual; hopefully it would be in a group, or at least some of the activities would be (e.g. commenting on blog posts, participating in a discussion forum, etc.).  In a tech-mediated environment, you could see others’ (anonymized) paths, access your feedback, and see traces of others’ trajectories.

The real question is: is this formulation useful? Does it give you a new and useful way of thinking about designing learning, and supporting learning?

MOOC reflections

29 February 2012 by Clark 18 Comments

A recent phenomenon is the MOOC, the Massive Open Online Course. I see two major manifestations: the type I have participated in briefly (mea culpa) as run by George Siemens, Stephen Downes, and co-conspirators, and the type being run by places like Stanford. Each involves large numbers of students, and laudable goals. Each also has flaws, in my mind, which illustrate some issues about education.

The Stanford model, as I understand it (and I haven’t taken one), features a rigorous curriculum of content and assessments, in technical fields like AI and programming. The goal is to ensure a high quality learning experience to anyone with sufficient technical ability and access to the Internet. Currently, the experience does support a discussion board, but otherwise the experience is, effectively, solo.

The connectivist MOOCs, on the other hand, are highly social. The learning comes from content presented by a lecturer, and then dialog via social media, where the contributions of the participants are shared. Assessment comes from participation and reflection, without explicit contextualized practice.

The downside of the latter is just that: with little direction, the courses really require effective self-learners. These courses assume that, through the process, learners will develop learning skills, and the philosophical underpinning is that learning is about making the connections oneself.  As was pointed out by Lisa Chamberlin and Tracy Parish in an article, this can be problematic. As yet, I don’t think effective self-learning skills are a safe assumption (and we do need to remedy that).

The problem with the former is that learners are largely dependent on the instructors, and will end up with only that understanding; they aren’t seeing how other learners conceptualize the information, and consequently aren’t developing a richer understanding.  You have to have really high-quality materials, and highly targeted assessments.  The success will live and die on the quality of the assessments, at least until the social aspect is engaged.

I was recently chided that the learning theories I subscribe to are somewhat dated, and guilty as charged; my grounding has taken a small hit from my not being solidly in the academic community of late. On the other hand, I have yet to see a theory that is as usefully integrative of cognitive and social learning theory as Cognitive Apprenticeship (and I’m willing to be wrong), so I will continue to use (my somewhat adulterated version of) it until I am otherwise informed.

From the Cognitive Apprenticeship perspective, learners need motivating and meaningful tasks around which to organize their collective learning. I reckon more social interaction will be wrapped around the Stanford environment, and that either I’ve not experienced the formal version of the connectivist MOOCs, or learners will be expected to take on the responsibility to make it meaningful but will be scaffolded in that (if not already).

The upshot is that these are valuable initiatives from both pragmatic and principled perspectives, deepening our understanding while broadening educational reach. I look forward to seeing further developments.

UTAOU Sunday mindmap

27 February 2012 by Clark 1 Comment

My mindmap of Sunday’s activities at Up To All Of Us.

[Mindmap image]

UTAOU Saturday Mindmap

26 February 2012 by Clark 1 Comment

Here is my mindmap of the group sessions on Saturday from the Up To All Of Us event.

[Mindmap image]

Designing the killer experience

6 February 2012 by Clark Leave a Comment

I haven’t been put in the position of having ultimate responsibility for driving a complete user experience in over a decade, though I’ve been involved in advising on many such projects.  But I continue my decades-long fascination with design, to the extent that it’s a whole category for my posts!  An article on how Apple’s iPhone was designed caused me to reflect.

On one such project, I asked early on: “who owns the vision?”  It soon became clear that no one had complete ownership. Their model was to have a large-scope goal, with various product managers taking pieces of that and negotiating for budget, with vendors for resources, and with other team members for the capability to implement their features.  And this has been a successful approach for many internet businesses: project managers owning their parts.

I compare that to the time, a decade ago, when I led a team developing a learning system: I laid out and justified a vision and gave them each parts, and while they took responsibility for their piece of the interlocking responsibilities, I was responsible for the overall experience.

Which is not to say that I was by any means as visionary as Steve Jobs. In the article, he apparently told his iPhone team to start from the premise “to create the first phone that people would fall in love with”.  I like to think that I was working towards that, but I clearly hadn’t taken ownership of such a comprehensive vision, though we were working towards one in our team.

And we were a team.  Everyone could offer opinions, and the project was better because of it.  I did my best to make it safe for everyone’s voice to be heard. We  met together weekly, I had everyone backing up someone else’s area of responsibility, and they worked together as much as they worked with me. In many ways, my role was to protect them from bureaucracy just as my boss’ role was to protect me from interference.  And it worked: we got a working prototype up and running before the bubble burst.

(I remember one time, the AI architect and the software engineer came in asking me to resolve an issue. At the end of it I didn’t fully understand the issue, yet they profoundly thanked me even though we all three knew I hadn’t contributed anything but the space for them to articulate their two viewpoints.  They left having found a resolution that I didn’t have to understand.)

And I don’t really know what the answer is, but my inclination is towards giving folks a vibrant goal and asking them to work together to make it so, rather than giving individuals tasks that compete to succeed.  I can see the virtues of Darwinian selection, but I have to believe, based upon things like Dan Pink’s Drive and my work with my colleagues in the Internet Time Alliance, that giving a team a noble goal, resourcing them, and giving them the freedom to pursue it is going to lead to a greater outcome.   So, what do you think?

 

Reviewing elearning examples

30 January 2012 by Clark 13 Comments

I recently wrote about elearning  garbage, and in case I doubted my assessment, today’s task made my dilemma quite clear.  I was asked to be one of the judges for an elearning contest.  Seven courses were identified as ‘finalists’, and my task was to review each and assign points in several categories. Only one was worthy of release, and only one other even made a passing grade.  This is a problem.

Let me get the good news out of the way first. The winner (in my mind; the overall findings haven’t been tabulated yet) did a good job of immediately placing the learner in a context with a meaningful task.  It was very compelling stuff, with very real examples and meaningful decisions. The real-world resources were to be used to accomplish the task (I cheated; I did it just from the information in the scenarios), and mistakes were guided towards the correct answer.  There was enough variety in the situations faced to cover the real range of possibilities. If I were then to start putting this information into practice in the real world, it might stick around.

On the other hand, there were the six other projects.  When I look at my notes, there were some common problems.  Not every problem showed up in every one, but all were seen again and again. Importantly, it could easily be argued that several were appropriately instructionally designed, in that they had clear objectives, and presented information and assessment on that information. Yet they were still unlikely to achieve any meaningfully different abilities.  There’s more to instructional design than stipulating objectives and then knowledge dump with immediate test against those objectives.

The first problem is that most of them had information objectives. There was no clear focus on doing anything meaningful, but instead on the ability to ‘know’ something.  And while in some cases the learner might be able to pass the test (either because they can keep trying ’til they get it right, or because the alternatives to the right answer were mind-numbingly dumb, both leading to meaningless assessment), this information wasn’t going to stick.  So we’ve really got two initial problems here: bad objectives and bad assessment.

In too many cases, also, there was no context for the information, no indication of how it connected to the real world.  It was “here’s this information”.  And, of course, one pass over a fairly large quantity, with the unreasonable and unrealistic expectation that it would stick.  Again, two problems: lack of context and lack of chunking.  And, of course, tests of random factoids that there was no particular reason to remember.

But wait, there’s more!  In no case was there a conceptual model to tie the information to.  Instead of an organizing framework, information was presented as essentially random collections.  Not a good basis for any ability to regenerate the information.  It’s as if they didn’t really care if the information actually stuck around after the learning experience.

Then, a myriad of individual little problems: bad audio in two, dull and dry writing pretty much across the board, and fixed timing that of course meant you were either waiting on the program, or it was not waiting on you.  The graphics were largely amateurish.

And these were finalists!  Some with important outcomes.  We can’t let this continue, as people are frankly throwing money on the ground.  This is a big indictment of our field, as it continues to be widespread.  What will it take?

Will tablets diverge?

25 January 2012 by Clark 2 Comments

After my post trying to characterize the differences between tablets and mobile, Amit Garg similarly  posted that tablets are different. He concludes that “a  conscious decision should be made when designing tablet learning (t-learning) solutions”, and goes further to suggest that converting elearning or mlearning directly may not make the most sense.  I agree.

As I’ve suggested, I think the tablet’s not the same as a mobile phone. It’s not always with you, and consequently it’s not ready for any use.  A real mobile device is useful for quick information bursts, not sustained attention to the device.  (I’ll suggest that listening to audio, whether canned or a conversation, isn’t quite the same; the mobile device is a vehicle, not the main source of interaction.)  Tablets are for more sustained interactions, in general: while they can be used for quick interactions, the screen size supports more sustained engagement.

So when do you use tablets?  I believe they’re valuable for regular elearning, certainly, though you would want to design for the touchscreen interface rather than mimic a mouse-driven interaction.  Of course, I believe you also should not replicate the standard garbage elearning, but instead take advantage of the chance to rethink the learning experience, as Barbara Means suggested in the SRI report for the US Department of Education, which found that elearning was now superior to face-to-face.  It’s not because of the medium itself, but because of the chance to redesign the learning.

So I think that tablets like the iPad will be great elearning platforms. Unless the task is inherently desktop, the intimacy of the touchscreen experience is likely to be superior.  (Though, even with Apple’s new market move, the books can be stunning, but they’re not a full learning experience.)  But that’s not all.

Desktops, and even laptops, don’t have the portability of a tablet. I, and others, find that tablets are taken more places than laptops. Consequently, they’re available for use as performance support in more contexts than laptops (though not as many as smart or app phones).  I think there’ll be a continuum of performance support opportunities, and constraints like quantity of information (I’d rather look at a diagram on a tablet), constraints of time & space in the performance context, and preexisting preferences for pods (smartphone or PDA) versus tablets will determine the solution.

I do think there will be times when you can design performance support to run on both pads and pods, and times you can design elearning for both laptop and tablet (and tools will make that easier), but you’ll want to do a performance context analysis as well as your other analyses to determine what makes sense.

 

 

Stop creating, selling, and buying garbage!

12 January 2012 by Clark 14 Comments

I was thinking today (on my plod around the neighborhood) about how come we’re still seeing so much garbage elearning (and frankly, I had a stronger term in mind).  And it occurred to me that there are multitudinous explanations, but it’s got to stop.

One of the causes is unenlightened designers. There are lots of them, for lots of reasons: converted trainers, lack of a degree, old-style instruction, myths, templates; the list goes on. You know, it’s not like one dreams of being an instructional designer as a kid.  This is not to impugn their commitment, but even if they did have courses, they’d likely still not be exposed to much about the emotional side, for instance. Good learning design is not something you pick up in a one-week course, sadly.  There are heuristics (Cathy Moore’s Action Mapping, Julie Dirksen’s new book), but the importance of the learning design isn’t understood and valued.  And the pressures they face would be overwhelming if they did try to change things.

Because their organizations largely view learning as a commodity. It’s seen as a nice-to-have, not as critical to the business.  It’s about keeping the cost down, instead of looking at the value of improving the organization.  I hear tell of managers telling the learning unit “just do that thing you do” to avoid a conversation about whether a course is actually the right solution, when the unit does try!  Organizations don’t know how to hire the talent they really need, it’s thin on the ground, and given that learning is seen as a commodity, they’re unlikely to be willing to really develop the necessary competencies (even if they knew what those are).

The vendors don’t help. They’ve optimized to develop courses cost-effectively, since that’s what the market wants. When they try to do what really works, they can’t compete on cost with those who are selling nice looking content, with mindless learning design.  They’re in a commodity market, which means that they have to be efficiency oriented.  Few can stake out the ground on learning outcomes, other than an Allen Interactions perhaps (and they’re considered ‘expensive’).

The tools are similarly focused on optimizing the efficiency of translating PDFs and PowerPoints into content with a quiz. It’s tarted up, but there’s little guidance for quality.  When there is, it’s old school: you must have a Bloom’s objective, and you must match the assessment to the objective. That’s fine as far as it goes, but who’s pushing the objectives to line up with business goals?  Who’s supporting aligning the story with the learner? That’s the designer’s job, but they’re not equipped.  And tarted-up quiz-show templates aren’t the answer.

Finally, the folks buying the learning are equally complicit. Again, they don’t know the important distinctions, so they’re told it’s soundly instructionally designed, and it looks professional, and they buy the cheapest that meets the criteria.  But so  much is coming from broken objectives, rote understanding of design, and other ways it can go off the rails, that most of it is a waste of money.

Frankly, the whole design part is commoditized.  If you’re competing on the basis of hourly cost to design, you’re missing the point. Design is critical, and the differences between effective learning and clicky-clicky-bling-bling are subtle.  Everyone accepts paying for technology development, but not for the learning design.  And that’s wrong.  Look, Apple’s products are fantastic technologically, but they earn their premium positioning through the quality of the experience, and that comes from the design.  It’s the experience and outcome that matters, yet no one’s investing in learning on this basis.

It’s all understandable, of course (sort of like the situation with our schools), but it’s not tolerable.  The costs are high: meaningless jobs, money spent for no impact; it’s just a waste.  And that’s just for courses; how about the times the analysis isn’t done that might indicate some other approach?  Courses cure all ills, right?

I’m not sure what the solution is, other than calling it out, and trying to get a discussion going about what really matters and how to raise the game. Frankly, the great examples are all too few. As I’ve already pointed out in a previously referenced post, the awards really aren’t discriminating. I think folks like the eLearning Guild are doing a good job with their DevLearn showcase, but it’s finger-in-the-dike stuff.

Ok, I’m on a tear, and usually I’m a genial malcontent.   But maybe it’s time to take off the diplomatic gloves, and start calling out garbage when we see it.  I’m open to other ideas, but I reckon it’s time to do something.

Performance Architecture

6 January 2012 by Clark 3 Comments

I’ve been using the tag ‘learning experience design strategy’ as a way to think about not taking the same old approach of events über alles.  The fact of the matter is that we’ve got quite a lot of models and resources to draw upon, and we need to rethink what we’re doing.

The problem is that it goes far beyond just a more enlightened instructional design, which of course we need.  We need to think of content architectures, blends between formal and informal, contextual awareness, cross-platform delivery, and more.  It involves technology systems, design processes, organizational change, and more.  We also need to focus on the bigger picture.

Yet the vision driving this is, to me, truly inspiring: augmenting our performance in the moment and developing us over time in a seamless way, not in an idiosyncratic and unaligned way.  And it is strategic, but I’m wondering if architecture doesn’t better capture the need for systems and processes as well as revised design.

This got triggered by an exercise I’m engaging in, thinking how to convey this.  It’s something along the lines of:

The curriculum’s wrong:

  • it’s not knowledge objectives, it’s skills
  • it’s not current needs, it’s adapting to change
  • it’s not about being smart, it’s about being wise

The pedagogy’s wrong:

  • it’s not a flood, but a drip
  • it’s not knowledge dump, it’s decision-making
  • it’s not expert-mandated, instead it’s learner-engaging
  • it’s not ‘away from work’, it’s in context

The performance model is wrong:

  • it’s not all in the head, it’s distributed across tools and systems.
  • it’s not all facts and skill, it’s motivation and confidence
  • it’s not independent, it’s socially developed
  • it’s not about doing things right, it’s about doing the right thing

The evaluation is wrong:

  • it’s not seat time, it’s business outcomes
  • it’s not efficiency, at least until it’s effective
  • it’s not norm-referenced, it’s criterion-referenced

So what does  this look like in practice?   I think it’s about a support system organized so that it recognizes what you’re trying to do, and provides possible help.  On top of that, it’s about showing where the advice comes from, developing understanding as an additional light layer.  Finally, on top of that, it’s about making performance visible and looking at the performance across the previous level, facilitating learning to learn. And, the underlying values are also made clear.

It doesn’t have to do all that right away.  It can start with just better formal learning design and a bit of content granularity. It certainly starts with social media involvement, and with adapting the culture in the org to start developing meta-learning.  But you want to have a vision of where you’re going.

And what does it take to get here?  It needs a new design approach that starts from the performance gap and looks at root causes. The design process then considers what sort of experience would both achieve the end goal and address the gaps in the performer equation (including both technology aids and knowledge and skill upgrades), and considers how that develops over time, recognizing the capabilities of both humans and technology, with a value set that emphasizes letting humans do the interesting work.  It’ll also take models of content, users, context, and goals, with a content architecture and a flexible delivery model, along with rich pictures of what a learning experience might look like and what learning resources could be.  And an implementation process that is agile, iterative, and reflective, with contextualized evaluation.  At least, that sounds right to me.
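
To hint at what models of content, users, context, and goals might mean in practice, here's a deliberately skeletal sketch; the fields are illustrative placeholders, not a prescribed schema.

```python
# Skeletal, illustrative models only; field names are placeholders.
from dataclasses import dataclass, field

@dataclass
class Content:
    topic: str
    granularity: str                    # e.g. "concept", "example", "practice"
    platforms: set = field(default_factory=set)   # where it can be delivered

@dataclass
class User:
    competencies: dict = field(default_factory=dict)   # skill -> level
    preferences: dict = field(default_factory=dict)

@dataclass
class Context:
    location: str                       # e.g. "at the desk", "on the shop floor"
    minutes_available: int

@dataclass
class Goal:
    performance_gap: str
    business_outcome: str

# A delivery decision would draw on all four: a short in-the-field context
# might call for a quick job aid, a longer one for a richer practice activity.
```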

Now, what sounds right to you: learning experience design strategy, performance system design, performance architecture, <your choice here>?

 

Authentic Learning

14 December 2011 by Clark 4 Comments

This week, #change11 is being hosted by Jan Herrington (who I had the pleasure of meeting in West Australia many years ago; highly recommended). She’s talking about authentic learning, and has a nice matrix separating task type and setting to help characterize her approach.  It’s an important component of making our learning more effective.  On the way home from my evening yesterday, I wrote up some notes about a learning event I attended, that seem to be perfectly appropriate in this context:

I had the pleasure of viewing some project presentations from Hult Business School, courtesy of Jeff Saperstein. It’s an interesting program; very international, and somewhat non-traditional.

In this situation, the students had been given a project by a major international firm to develop recommendations for their social business. I saw five of the teams present, and it was fascinating.    I found out that they balanced the teams for diversity (students were very clearly from around the world including Europe, Asia, and Latin America), and they got some support in working together as well.

Overall, the presentations were quite well done. Despite some small variation in quality and one quite unusual approach to the problem, I was impressed with the coherence of the presentations and the solidity of the outcomes.  Some were clearly innovative ideas that would benefit the firm.

The process was good too; the firm had organized a visit to their local (world class) research center, and were available (through a limited process) for  questions. A representative of the firm heard the presentations (through Skype!) and provided live feedback. He was very good, citing all the positives and asking a few questions.

Admittedly they had some lack of experience, but when I think how I would’ve been able to perform at that age, I really recognized the quality of the outcome.

This sort of grounded practice in addressing real questions in a structured manner is a great pedagogy.  The students worked together on projects that were meaningful to them both in being real and being interesting, and received meaningful feedback. You get valuable conceptual processing and meta-skills as well. The faculty told me afterward that many of these students had worked only in their home company prior to this, but after this diverse experience, they were truly globally-ready.

How are you providing meaningful learning experiences?
