Learnlets


Clark Quinn’s Learnings about Learning

Wise technology?

25 September 2018 by Clark

At a recent event, they were talking about AI (artificial intelligence) and DI (decision intelligence). And, of course, I didn’t know what the latter was, so it was of interest. The description mentioned visualizations, so I was prepared to ask about the limits, but the talk ended up being more about decisions (a topic I am interested in) and values. Which was an intriguing twist. And this, not surprisingly, led me back to wisdom.

The initial discussion talked about using technology to assist decisions (c.f. AI), but I didn’t really comprehend the discussion around decision intelligence. A presentation on DA, decision analysis, however, piqued my interest. In it, a guy who’d done his PhD thesis on decision making talked about how, when you evaluate the outputs of decisions to determine whether the outcome was good, you need values.

Now this to me ties very closely back to the Sternberg model of wisdom. There, you evaluate both short- and long-term implications, not just for you and those close to you but more broadly, and with an  explicit  consideration of values.

A conversation after the event formally concluded cleared up the DI issue. It apparently is not training up one big machine learning network to make a decision, but instead having the disparate components of the decision modeled separately and linking them together conceptually. In short, DI is about knowing what makes a good decision and using it. That is, being very clear on the decision making framework to optimize the likelihood that the outcome is right.

And, of course, you analyze the decision afterward to evaluate the outcomes. You do the best you can with DI, and then determine whether it was right with DA. Ok, I can go with that.
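To make the DI-then-DA idea concrete, here is a toy sketch (entirely my own illustration in Python, with hypothetical component names, not any actual DI framework): the components of a decision are modeled separately, linked together conceptually, and then the outcome is evaluated afterward against explicit values.

```python
# Toy illustration of DI (separate, linked decision components)
# followed by DA (evaluating the outcome against explicit values).
# All names and numbers are invented for the example.

def market_demand(price):
    """One component: estimated units sold at a given price."""
    return max(0, 1000 - 40 * price)

def profit(price, unit_cost=5):
    """A second component, linked conceptually to the first."""
    return (price - unit_cost) * market_demand(price)

def evaluate_outcome(price, values):
    """DA-style check: was the outcome good, judged against stated values?"""
    return {
        "profit_ok": profit(price) > 0,          # short-term check
        "affordable": price <= values["max_fair_price"],  # a value, made explicit
    }

result = evaluate_outcome(price=12, values={"max_fair_price": 15})
```

The point is only structural: each component stays inspectable on its own, and the values used to judge the outcome are written down rather than implicit, which is the transparency the post argues for.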

What intrigues me, of course, is how we might use technology here. We can provide guidelines about good decisions, provide support through the process, etc. And, if we want to move from smart to wise decisions, we bring in values explicitly, as well as long-term and broad impacts. (There was an interesting diagram, dubbed the ‘lobster claw’, where the short-term result was good but the long-term one wasn’t.)

What would be the outcome of wiser decisions?  I reckon in the long term, we’d do better for all of us. Transparency helps, seeing the values, but we’d like to see the rationale too. I’ll suggest we can, and should, be building in support for making wiser decisions. Does that sound wise to you?

Labels and roles

12 September 2018 by Clark

I was just reflecting on the different job labels there are. Some of the labels are trendy, some are indicative, but there’s potentially a lot of overlap. I’m not sure what to do about it, but I thought I’d explore it.  So this is a bit of an unconstructed thought…

To start somewhere, let’s start with Learning Architect. This is an interesting one (and one I just chose on a project). It leverages the metaphor of the relationship between an architect and the contractor who builds the design. The architect imagines the flow of people, places of rest, and creatively evaluates how to match the requirements with the available space (and budget). Then someone else builds it. This is similar to a learning designer, who envisions a learning experience via a storyboard (mapping to a blueprint), before handing off to a developer.

So what is a learning experience designer? Here is someone envisioning the cognitive (and aesthetic) flow the learner will go through. It’s looking at addressing the change in knowledge and emotions, as a user experience designer might for an interface. If they build it as well, they’re a learning experience developer instead of, or in addition to, a designer.

Right now I see both as equivalent. An architect is developing the flow of people and their emotions in the space. Where do you want them active, and where do you want them reflective?  The learning experience designer similarly. Are they just different cuts on the same role? I note that in the 70:20:10 process of Arets, Jennings, and Heijnen, learning architect is a role that sits between doing the analysis and implementing the solution.

I also have heard of a learning  strategist.  This could be the same, coming up with a series of tactics to transform the learner into someone with new capabilities.  Or this could be a meta-level, a role I frequently play, reviewing the design process for changes that can maximize the outcomes with a minimum of disruption.

Then there’s learning engineering, which is in the process of being defined by a committee. It not only includes the learning science of design, but also the technical implementation. Certainly architects and designers need to be aware of the tech, not stipulating the impossible, but this role goes deeper, on to systems integration and more.

Of course, we have the traditional instructional designer, which captures the notion of facilitated learning, but not the integration of the aesthetic component. And, on the whole, I’m avoiding the ‘developer’ label for the people who take a storyboard to a realized experience. There are clearly people who have to straddle both (I recently asked an audience how many were sole practitioners in this sense, and a majority seemed to have to both design and develop, presumably in the tools that support that).

All these labels may reflect how an organization is dividing up the whole process. I’m not even certain that the way I’ve characterized them is accurate.  What labels am I missing? What nuances?  Does this make sense?

Translational research?

6 September 2018 by Clark

I came across the phrase “Translational Behavior-Analysis”. I had no idea what that was, so I looked it up.  And I found the answer interesting. The premise is that this is an intermediary between academic and applied work.  Which I think of as a good thing, but is it really a  thing? Does it make sense?  I have mixed feelings, so I thought I’d lay them out.

So, one of the things that a few people do is translate research to practice. I’m thinking of folks who are quite explicit about it like Will Thalheimer, Patti Shank, Julie Dirksen, Ruth Clark, and Mirjam Neelen, amongst others. They’re also practitioners, designing or consulting on solutions, but they can read research in untranslated academese and make sense of it. So is this that?

One definition I found said: “the process of applying ideas, insights, and discoveries generated through basic scientific inquiry to” <applied discipline>.  This is big in medical circles, apparently.  And that’s a good thing, I hope you’d agree.  However, they also say “occupies a conceptual space between basic research and applied research”.  Wait, I thought that  was applied research!

Ok, so further research found this gem: “Applied research is any research that may possibly be useful for enhancing health or well-being. It does not necessarily have to have any effort connected with it to take the research to a practical level.” Ah, so we can do things in applied research that we think might be good, even if it isn’t connected to basic research. Well, then. When I think of applied cognition, which has shown up in interface design (and which I try to push in learning experience design), I think of that as doing what they call translational, but perhaps it’s not that way in other fields.

Ultimately, this was about fast-tracking medical research into changing people’s lives. And that’s a good thing. And I think our ‘interpreters’ are indeed serving to help take academic research and fast-track it into our learning designs. Will has called himself a ‘translator’ and that’s a good thing.

We also need a way for our own innovations, for instance taking agile software development and applying it to learning design, to filter back to academia and perhaps get a rigorous test. There are people experimenting with VR and other technologies, for instance, and some of the experimentation is “why not this” instead of “theory suggests that”. And both are good. We may need translators both ways, and I think the channel back to academia is a bit weak, at least in learning and technology. Happy to be wrong about that, by the way!

I’m mindful that we have to be careful about bandwagons. There’s a lot of smoke and hype that makes it easy to be distracted from fundamentals that we’re still not getting right. And I’m not sure whether applied or translational is the right label, but it is an important role. I guess I still think that a tight coupling between basic and applied implies translational (I like Reeves’ Design Research as a bridge, myself), but I’m happy to accept more nuanced views. How about you?

Transparency isn’t enough

30 August 2018 by Clark

Of late, there have been a number of articles talking about thinking and mental models (e.g. this one). One of the outcomes is that we have a lot of stories about how the world works. Some of them are accurate. Others, not. And pondering this when I should’ve been sleeping, I realized that there was a likelihood that our misinterpretations could cause problems. It made me think that maybe transparency isn’t enough. What does that mean?

We build models, period. We create explanations about how the world works. And they may not be right. If we aren’t given good ones up front, it’s likely they won’t be. It’s also the case that they seem to come from previous models we’ve seen. (And diagrams. ;)

Now, it’s easy to misattribute an outcome to the wrong model if we don’t have better explanations. And this comes into play when we’re trying to figure out what has happened, or why something happened. This includes decisions made by others that may affect us, or even just lead to outcomes such as product designs, policies, or more.

Where I’m going is this: if we don’t see the thinking that explains how we got there, not just the process followed, we can infer wrongly about  why it happened. And this is important in the ‘show your work’ sense.

I’m a fan of transparency. I like it when politics and other decisions are scrutable; we can see who’s making the decision, what influences they’ve had, what steps they took to get there. That’s not enough, however. Particularly when you disagree or have a problem. Take LinkedIn, for example; when I connect to someone using the app on the iPad, I can then send them a message, but when I do it through the web interface on my computer, it wants to use one of those precious ‘InMail’s. It’s inconsistent (read: frustrating). Is there a rationale?

So I’m going to suggest that just transparency is necessary, but not sufficient. You can’t just show your work, you need to show your thinking. You need to see the rationale!  Two reasons: you can learn more when you see the associated cogitation, and you can provide better feedback as well.  In short, we want to see  why they believe this is the right solution. Otherwise, we could question their decision because we misattribute the reasoning.

Transparency is great, but if you can’t see the thinking behind it, you can make wrong inferences.  It’s better if you can see the thinking  and the result. Is this transparent enough on both?

Question: values?

22 August 2018 by Clark

So, I’m wrestling with how to characterize useful changes in an organization. I’ve been compiling a list of different tactics (e.g. implement coaching, show-your-work, support curation, etc), and want to map them to the changes you’ll get in the organization. I’ve wanted to tie them to another set of various outcomes: improved participation, innovation, etc. But, while I have the tactics, I’m still looking for some minimal useful breakdown of outcomes. I’ll lay out my very preliminary thoughts around the values we’re trying to develop/influence, and I welcome input, pointers, what have you.

My goal, I should be clear, is to try to take specific changes we want in an organization, and have them linked to specific tactics.  And, of course, a new school approach.  That is, tactics that move organizations into directions that create learning organizations.

I start with the three elements Dan Pink talks about in his book Drive. In it, he lists three core motivators of employees: Purpose, Autonomy, and Mastery (this is my order, not his). Purpose is why what you’re doing matters: what it does for the org, and that what the org is doing also matters. Then, autonomy is when you’re given the freedom to pursue your purposes. Now, you may not be completely capable of that, so there’s support for mastery, to develop the capabilities to succeed. I think these are all great, but are they sufficient in and of themselves? Are these the right things to want to impact?

I’m also a fan of Amy Edmondson’s quadrant model of psychological safety and accountability. Without either, you’re loafing. With just safety, you’re happy. With just accountability, you’re fearful. But if you’ve accountability  and  safety, you get results.  This draws upon the richer work of Garvin, Gino, and Edmondson on the components of innovation.  That model adds time for reflection, diversity, and openness to new ideas. Is this a better way to think about it?

There’re also personal values (which might be organizational, too).  Barack Obama, in his keynote to ATD 2018, had two very simple ones: be kind, and be useful.  I’ve extended that out one notch, to include three: responsibility (do the right thing, and  do something [useful]), integrity (honesty, do what you promise), and compassion (respect, helping, etc [kind]).  Is that a full set? Or is responsibility derivable from integrity? I’ve a collection of a suite of value proposals (five, with entries ranging from 5 – 8 core values).  Can you derive some of the others from the three I have? E.g. does courage come from integrity and responsibility? Does fairness come from compassion and integrity?  I don’t know.

And so, I’m not sure what the  right core set is.  Trust has to be in there somehow, but is that derivative from integrity?  And do I frame it from the change we want in the org, or the change in the people?  I’m inclined to the former.  And are they unitary, or can the tactics impact more than one? (Preliminary: more than one.)

Obviously, I’m at an early stage in formulating this.  I can beaver away on it on my own, but I’m happy to hear pointers, thoughts, etc.  Yes, I’m trying to diagram it too, but nothing coherent has  yet emerged.  So, once again, this is me ‘thinking out loud’.  Care to do similarly and share?

Complex thinking

21 August 2018 by Clark

An interesting article I came across brings up an interesting issue: how do we do complex thinking?  Are some people just better at it?  The short answer appears to be ‘no’.  Instead, a couple of tools play a role, and I think it’s an interesting excursion.

The article says that our brains are limited in thinking about complex situations. Yet, experts can do this.  How? The article cites metaphors as the key, grounding our thinking in models that we’ve developed from our experiences. They draw upon George Lakoff’s work on metaphor (a core aspect of my grad school experience) to explain how our understanding advances.  At core, there’s a fundamental requirement that our knowledge builds upon previous knowledge, which ultimately is grounded in our physical activities.

My PhD thesis topic was thinking with analogy, which shares much with this model. The point being that we use familiar frameworks to make inferences in new areas. We map the familiar to the points in the new that match, and then we extrapolate from the familiar to explain things in the new. And using familiar models as explanatory frameworks is essentially the same process as metaphor. Metaphors tend to be more literal, with a shared point, while analogies go further, and share structure. The latter is, I’ll suggest, more useful.

Note that the frameworks are built of conceptually-related causal relationships, e.g. models. Thus, when we want to communicate models, we can detail them, but using metaphors or analogies is a short-cut. When we want someone to be able to understand, particularly to be able to use the reference as a tool to support doing, we can use them to facilitate comprehension. We want to leverage, as much as possible, pre-existing knowledge. And people aren’t necessarily great at coming up with analogies (research shows), but they’re good at using them.

Another short-cut that the article cites is diagrams.  Here, we’re making visible the relationships, supporting the understanding. Equations can get specific, but conceptual understanding is facilitated by seeing the connections.

The important outcome is that we all have cognitive limitations to overcome, but we’ve also developed powerful tools to work around those limitations. To the extent we understand how these tools support learning, we can use them to help achieve the outcomes we need. We can do complex thinking, with the right tools. Are you facilitating success by leveraging these tools?

Old and new school

8 August 2018 by Clark

As I mentioned in yesterday’s post, I was asked for my responses to questions about trends.  What emerged in the resulting article, however, was pretty much contrary to what I said. I wasn’t misquoted, as I was used to set the stage, but what followed wasn’t what I said. What I saw was what I consider somewhat superficial evaluation, and I’d like to point to new school thinking instead.

So the article went from my claim about an ecosystem approach to touting three particular trends. And yet, these trends aren’t really new and aren’t really right! They were touting mobile, gamification, and the ‘realities’. And while there’s nothing wrong with any of them, I had said that I didn’t think that they’re the leading trends.

So, first, mobile is pretty much old news. Mobile first? Er, it’s only been 8 years or so (!) since Google declared that! What’s cool about mobile, still, is sensors and context-awareness, which they don’t touch on. And, in a repeated approach, they veered from the topic to quote a colleague. And my colleague was spot on, but it wasn’t in the least about mobile! They ended this section talking about gamification and AR/VR, yet somehow implied that this was all about mobile. That would be “no”.

Then they talked about users wanting to be active. Yay! But, er, again they segued off-topic, taking up personalization before going to microlearning and back to gamification and game-based learning(?). Wait, what? Microlearning is an ill-defined concept, and conflating it with game-based learning is just silly. And games are real, but it’s still hard to do them (particularly do them right, instead of tarted up drill-and-kill). Of course, they didn’t really stay on topic.

Finally, the realities. Here they stayed on topic, but really missed the opportunity. While AR and VR have real value, they talked about 360 photography and videography, which is about consumption, not interaction. And that’s not where the future is.

To go back to the initial premise – the three big trends – I think they got it wrong. AI and data are now far more of a driver than mobile. Yes, AR/VR, but interaction, not just ‘immersion’. And probably the third driver is the ecosystem perspective, with systems integration and SaaS.

So, I have to say that the article was underwhelming in insight, confused in story, and wrong on topic. It’s like they just picked a quote and then went anywhere they wanted.   It’s old school thinking, and we’re beyond that. Again, my intention is not to continue to unpack wrong thinking (I’m assuming that’s not what you’re mostly here for, but let me know), but since this quoted me, I felt obliged.  It’s past time for new school thinking in L&D, because focusing on content is, like,  so last century.

Trends in L&D

7 August 2018 by Clark

I agreed to be interviewed for an article, and was sent questions. And I wrote what I thought were cogent answers.  I even dobbed in a couple of colleagues to also be interviewed. However, the resulting article isn’t what I expected at all. Now, I don’t  intend to make all my posts critiques of what’s being said, but sometimes I guess I just can’t help myself!  So first, here’re my original answers.  In my next post, I’ll document the article’s claims, and my rejoinders about what I think are the driving trends in L&D.

The original questions and responses:

How has our thinking evolved on using technology to assist in learning and development?

Thinking around technology for Learning & Development has shifted from delivering ‘courses’ to looking at the entire learning and performance ecosystem, where technology can not only help us perform in the moment but also develop us over time. This adds performance support, resources and portals, and communication and collaboration tools to support learning alone and together, from formal through to informal learning. We’re recognizing that, to move forward, organizations that can learn fastest are the ones most likely to not just survive but thrive. However, this goes beyond the tools and the people to the structures, values, and culture that underpin practices.

Do you think the current systems in use for L&D are adequate? If not, why so?

The legacy of the training mentality is keeping us mired in the past. I think that adding portal and social media capabilities to systems with a ‘course’ DNA isn’t the path forward. Instead, we should be looking to integrate capabilities from the best instances in every area. We want flexibility to switch tools if we find better solutions to specific needs, not one overworked legacy system. An LMS (learning management system; misnamed because you don’t manage learning) may well still be of use to manage courses and signups, but it’s the wrong foundation for the more agile future we need. Supporting curation and creation, and negotiating shared understandings, are the learning that’re going to be most valuable, and that requires not just different tools, but a different mindset. It’s time to shift from delivery to facilitation.

What technology-assisted learning tools do you think hold the most potential?

Collaborative tools are the most important tools: the ability to collectively generate and manipulate representations that document how our thinking evolves is important. Such tools that support simultaneous and asynchronous work and communication will be key to the ongoing learnings that will propel organizations forward. New tools like VR can lead to deeper formal learnings, and AR will help both as performance support and annotating the world, but collaborative immersion and annotation fit into that first category. When we’re developing an understanding together, we’re creating the richest outcome. There are nuances in doing that right, and that’s part of L&D’s role too, but it’s about tapping into the power of people. Technology that facilitates learning together is what will have the biggest impact.

What do you think is next for learning tech? Is there a huge shift coming?

I think the biggest thing coming for learning tech isn’t the tech. The ICICLE initiative from IEEE that is defining ‘learning engineering’ is a big move to start getting smarter about integrating the two components: learning science and technology design and development. Too often learning science is ignored (c.f. ‘rapid elearning’) or the technical sophistication is missing (e.g. tracking done only at the ‘course’ level). I think that once we get our minds around the importance of the integration, we’ll be far better positioned to tap into the advancements we’re seeing. While I think the hype about Artificial Intelligence is overblown, ultimately I believe that we’ll have more powerful tools to automate what doesn’t require the sophisticated capabilities of our brains, freeing us up to do the important work. And that work will be collaborating to generate new understandings. I do think there’ll be a big shift, but it’ll be coming along slowly. I hope this shift happens, but I think it’s evolutionary, as change is hard.

Ok, so that’s what I said about the trends in L&D. What you will see is that what they presented is somewhat contrary to what I said here!

Distributed Cognition

24 July 2018 by Clark

In my last post, I talked about situated cognition. A second, and related, cognitive revelation is that thinking is distributed between our heads and the world. That is, the model that it all occurs between the ears doesn’t recognize that we incorporate external representations as part of our processing. Hutchins, in his Cognition in the Wild, documented a variety of ways that our thinking is an artefact of our tools and our models.

So, for example, navigation typically involves maps as well as thinking. Business reasoning is typically accompanied by tools like a spreadsheet. We use diagrams, tables, graphs, charts, and more to help us understand situations better. And we are unlikely to be able to do things like long division without paper and pencil or a calculator. This means that putting everything in the head isn’t necessary. And this is just what we  should be doing!   Designing for the right distribution of tasks between world and mind(s) is the optimal solution.

We know that it’s difficult to get things in the head (how hard is it to learn, say, to drive), and therefore undesirable anyway.  It’s about designing solutions that put into the world what  can be in the world, and then putting into the head  what  has to be in the head. This includes performance support in a variety of ways. It also should address what we consider to be worth training.

When we want to optimize performance, we should recognize that we need a bigger picture. We need to consider the person & tools, or people & tools, as a whole entity when it comes to achieving the end goal.  This is also true for learning. Our reflective representations are part of our thinking process. So, too, our collaborative representations.

We are better thinkers  and  learners when we consciously consider tools, and their availability in the ecosystem. In fact, our ecosystem  is the tools and people we have ‘to hand’, accessible in or from the workflow. And elsewhere, in our times for reflection, and discussion. So, have you optimized your, and your organization’s thinking and learning toolset?

Situated Cognition

18 July 2018 by Clark

In a recent article, I wrote about three types of cognition that are changing how we think about how we think (how meta!). All are interesting, but they also have implications for supporting us in doing things. I think it’s important to understand these cognitions, and their implications. First, I want to talk about situated cognition.

The psychological models of thinking really started with the behavioral models. The core argument was that we couldn’t look ‘inside the box’, and had to study inputs and outputs. Cognitive psychology was a rebellion from this perspective. The new frameworks started showing that we could posit quite a bit about what went on ‘in the box’. We got concepts like sensory, working, and long-term memory, and processes like attention, rehearsal, encoding, and retrieval, along with most of our learning prescriptions. However, both were about ‘the box’.

However, the observed behavior didn’t match the formal logical reasoning that underpinned the model. We needed new explanations. The computational model fell apart. And, despite rigorous attempts to create logical models that described human behavior, they were awkward at best. The shift came when Rumelhart & McClelland, in their PDP book, described what became known as neural networks. Associated with this was a new model of cognition.

What gets activated in the brain is not a reliably pure representation, and is strongly affected by the context. Thinking is ‘situated’ in the context it arises in. If our thinking is the emergent behavior of patterns across neurons, and those patterns are the result of both internal and external stimuli, then we’re very strongly influenced by what’s happening ‘in the moment’.  And that means that we can be captured (and fooled) by elements that may not even be consciously processed.

What this means in practice is that it’s harder than we think to get reliable performance across a range of conditions. We should ensure that patterns are generated across ‘noise’ so that they’re reliable in the face of the appropriate triggers, despite any accompanying contextual patterns, and recognize that decisions can be biased, and design scaffolding to prevent inappropriate outcomes. Developing mental models that provide reasoning abilities about causes and outcomes is useful here. This flexibility is advantageous (and why machine learning struggles outside its range of training), but we want to tap into it in helpful ways.

Our approaches should reflect what’s known, and therefore we need to keep up.  Situated cognition is a perspective that’s relevant to more effectively supporting individual and organizational performance and learning.  So, what is  your thinking about this?
