Learnlets


Clark Quinn’s Learnings about Learning

Layering learning

8 September 2011 by Clark 3 Comments

Electronic Performance Support Systems are a fabulous concept, pioneered by Gloria Gery back in the early '90s. The notion is that as you use a system, and have entries or decisions to make, tools are available that can provide guidance: proactively, intelligently, and context-appropriately. Now, the complaint I heard at the time was that this would really just be good interface design, but the fact is that many times you have to retrofit assistance on top of a bad design, for sad but understandable reasons.

The originals were around desktop tasks, but the concept can easily be decoupled from the workplace via mobile devices. One of my favorite examples is the GPS system: the device knows where you are and where you want to go (because you told it), and it gives you step-by-step guidance, even recalculating if you make a change. Everything from simple checklists to full adaptive help is possible, and I've led the design of such systems.

One of the ideas implicit in Gery's vision, however, that I really don't see realized, is the possibility of having the system not only assist you in performing, but also help you learn. She raised the idea in her book on the subject, though without elaborating how it would happen; her examples didn't really show it, and I haven't seen it in practice in the years since. Yet the possibility is there.

I reckon it wouldn't really take much. There is (or should be) a model guiding the decisions about what makes the right step, but it's often hidden (in our learning as well). Making that model visible, and showing how it guides the support and recommendations being made, could be offered as a 'veneer' over the system. It wouldn't have to be visible by default; it could just be available at a click, or as a preference for those who want it.
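
To make that concrete, here's a minimal sketch (in Python, with invented names and content, not any particular EPSS product) of what such a veneer might look like: each support step carries the rationale from the underlying model, but the performer only sees it if they opt in.

```python
# A minimal sketch of the 'veneer' idea: each support step carries the
# rationale from the underlying model, shown only if the user opts in.
# All names and content here are illustrative.
from dataclasses import dataclass

@dataclass
class SupportStep:
    instruction: str   # what the performer should do next
    rationale: str     # the 'why' from the underlying model

def render(step: SupportStep, show_rationale: bool = False) -> str:
    """Return the guidance text, optionally layering on the learning content."""
    text = step.instruction
    if show_rationale:  # the learner-selectable 'veneer'
        text += "\n  Why: " + step.rationale
    return text

step = SupportStep(
    instruction="Confirm the customer's billing address before issuing a refund.",
    rationale="Refunds to unverified addresses are the main source of chargebacks here.",
)
print(render(step))                        # performance support only
print(render(step, show_rationale=True))   # support plus the learning layer
```

The point isn't the code; it's that the learning layer costs almost nothing once the model behind the guidance is made explicit.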

Part of my vision of how to act in the world is to 'learn out loud'. Well, I think our tools and products could be more explicit about the thinking that went into them as well. Many years ago, in HyperCard, you could just use buttons and fields, but you could also open them up and get deeper into them, going from fixed links to coded responses. I have thought that a program or operating system could work similarly, having an initial appearance but capable of being explored and customized. We do this in the real world, choosing how much we want to learn about something (and I still want everyone who uses a word processor to learn about styles!). Some things we pay someone else to do; other things we want to do ourselves. We learn about some parts of a program, and don't know about others (it used to be joked that no one knows everything about Unix; I feel the same way about Microsoft Word).

We don't do enough performance support as it is, but hopefully, as we look into it, we'll consider the possible benefits of supporting the performance with some of the underlying thinking, generating more comprehension and the associated benefits that brings. It's good to reflect on learning, and seeing how thinking shapes performance both improves us and can improve our performance as well.

Levels of analysis

26 July 2011 by Clark Leave a Comment

When I was a grad student, a fellow student did an interesting study. In analogical reasoning, what helps is abstracting from the specifics to the more general (and folks are bad at generating good analogies, though okay at using them, according to my PhD research and others'). Folks had made efforts at getting abstraction, and failed. What my fellow student did was control the abstraction, and got useful outputs. It turns out some abstract too far, though in general most don't go far enough.

From that beginning, I've been interested in useful mental models, and in good analysis from appropriate levels of abstraction. That's what I have tried to do in my books: abstract to useful levels, and guide application in pragmatic ways. And that's what I look for in others' work as well. My PhD advisor has served as an excellent model: Don Norman's Design of Everyday Things is still a must-read for anyone designing for humans, and his subsequent books have similarly provided valuable insight.

I like the thinking of a number of folks who do this well. For instance, I'm regularly learning with my Internet Time Alliance colleagues (Jay, Jane, Harold, and Charles). Jane Bozarth, Marc Rosenberg, Allison Rossett, Will Thalheimer, Marcia Conner, and the Donalds, Clark & Taylor, are just a few of the folks who cut through the hype with incisive thinking. There are of course others I'm forgetting to mention (my apologies). They're looking for best principles, not best practices.

It's similar thinking that helps break down new technologies and find the key affordances for learning, avoiding other intriguing but ultimately distracting features (PowerPoint presentations in Second Life, anyone?). You need to look a bit deeper than the surface.

Interestingly, to do so really requires taking time for reflection.  Which is why it always frustrates me to hear those folks who say “I don’t have time for reflection”.  Really?  You don’t have time to do the most valuable level of thinking that will impact your effectiveness and ultimately save you time and money?

And can we please put this process into our school curriculum as well?  I benefited mightily by having a 12th grade AP English teacher (that’s you, Dick Bergeron) who modeled deeper thinking and used reciprocal teaching (without having that label) to help us develop our own abilities.  While I try to do so for my own kids, our society and world needs more folks thinking at useful levels.

So, please, take time to step back from your day-to-day problem-solving, abstract across your activities, and look for higher-level principles, both emergent and external, that can improve what you're doing.

 

Integration (or not)

14 June 2011 by Clark 1 Comment

I've recently been asked about what industries are leading in the use of (choose one: mobile, games, social). And, in my experience, while there are some industry examples (medicine in mobile, for instance), it's more about who's enlightened enough yet than about any particular industry. Which made me think a little deeper about what I do, and don't, see.

What I do see are pockets of innovation. This company, or this manager, or this individual, will innovate in a particular area.  Chris Hoyt has innovated in social learning for recruitment for PepsiCo, and is now branching out into mobile.  One company will do games, another mobile, another social. And that’s ok as a starting point, but there’s more on the table.  You want to move from tactics to strategy.

Performance Ecosystem

I want to suggest it's better if someone higher up sees that tying the elements together into a coherent system is the larger picture. You don't just want the individual tactics; you want to see them as steps towards that larger picture. At the end of the day, you want your systems tied together in the back end, providing a unified environment for performance for the individual. And that takes a view of where you're going, and the appropriate investment and experimentation.

I recall (but not the link, mea culpa) a recent post or article talking about the lack of R&D investment in the learning space (let me add, the performance space overall).  That is, folks aren’t deliberately setting aside monies to fund some experimentation around learning.  Every learning unit should be spending 3-5% of the budget on R&D.  Is that happening?  If so, it’s not obvious, but I’m happy to be wrong.

I really struggle to find an organization that I think is getting on top of this in a systematic way: one that has realized the vision, is aligning tactics to organizational outcomes, and is looking to integrate the technologies in the back end to capitalize on investment in content systems, social media systems, portal technologies, and learning management systems. This can be customer-facing as well, so that you're either meeting customer learning needs around other products or services, or delivering learning experiences as a core business, but still doing so in a coherent, comprehensive, and coordinated approach.

I am working with some folks who are just starting out, but I think the necessity to link optimal execution with continual innovation is going to require much more thorough efforts than I’m yet seeing.  Am I missing someone?  While I love to hear about exemplary individual efforts, I’d really like to hear from those who are pulling it all together as well.

Beyond Talent

16 May 2011 by Clark 1 Comment

A post I wrote for the ATC conference:

As I prepare to talk to the Australasian Talent Conference I’ve naturally been thinking about the intersection of that field and what I do. As I recently  blogged, I think there’s an overlap between OD and the work of trying to facilitate organizational performance through technology. I think Talent Management  similarly has an overlap.

While technology is used in talent management, it really is more focused on the management part, supporting the role of HR in recruitment, competencies, and more. Which is good, but now there's more on the table. We now have the benefits of Web 2.0 to leverage. To understand how, it helps to look at the characteristics of Web 2.0. Brent Schlenker talks about the 5-ables:

  • findable – the ability to use search to find things
  • feedable – the ability to subscribe to content
  • linkable – the ability to point to content
  • taggable – allowing others to add descriptors
  • editable – allowing others to add content

At core, this is about leveraging the power of the network to get improved outcomes. When others can add value, they do. We have seen that in learning and development, and the drivers there are not unique to the area.

Things are moving faster, and information is increasing. Worse, that information is more volatile, as well. As if that weren’t enough, competition is increasing.  The luxury to plan, prepare, and execute is increasingly a thing of the past.  As a consequence, optimal execution is only the cost of entry, and continual innovation is the necessary differentiator.

As a result, the old top-down mentality is no longer a solution; one person cannot do all the necessary thinking for a team. Instead, forward-thinking organizations are finding the solution in empowering their people to work together to come up with the necessary solutions. They are devolving problem-solving, research, design, and innovation further down in the organization, and realizing real results from the process. Instead of having to own all the content, learning units are instead facilitating the development of answers from among the stakeholders.

Note that by doing so, organizations are also making work more meaningful and consequently more rewarding. As Dan Pink’s Drive demonstrates, individuals are more motivated by the opportunity to engage than by artificial rewards. And these results are not unique to high-tech, but being seen in organizations engaged in manufacturing, medicine, and more.

This revolution can, and should, be seen in talent management as well. Throughout the lifecycle of talent, the network can add value. Beyond recruiting, networks can be used for talent evaluation, and then within the organization for onboarding, development, performance management, and even debriefing and alumni activities.

The point is to think about how to tap into the power of people. When you hire people now, you are not just hiring what is in their heads, but also what's in their networks. Similarly, they are choosing organizations by how well those organizations use networks. As the Cluetrain Manifesto documented, an organization can no longer control the message. If an organization is inauthentic externally, it is a safe bet that it is similarly dysfunctional internally.

Social media is much more than just marketing; it's a tool to take advantage of for many reasons. More meaningful work, better outcomes, and a better connection to the market are just the top-level benefits. Social: it's not just for parties any more.

 

On Competencies and Compliance

3 May 2011 by Clark 4 Comments

While my colleagues in the ITA and I are railing against the LMS as a complete solution for organizational performance (and the vendors rally back with their move beyond course management with social and portal capabilities, to be fair), one overriding cry is heard: “but we have to do compliance!”  And, yes, they do. But that umbrella covers a multitude of sins as well as some real importance.

So, for the record, I acknowledge that I want procedures followed when lives are on the line and other cases where it’s important.  Yes, I do want oil well procedures followed, ethics in financial transactions, careful scrutiny of pharmaceutical research,  harassment-free workplaces,  and more.   I like that there are procedures for pre-flight safety, medical sanitation, etc.  So don’t get me wrong.

What I am concerned about, however, are two things. For one, I see the effectiveness of classes ranging from very practical guidance to ridiculously useless knowledge tests. Let's be clear: telling someone about something and having them recite back the knowledge isn't going to lead to meaningful change in behavior. An expert in emotional intelligence told me that most workplace bullying interventions are worthless: the person responds appropriately to the information on a post-class test, but then goes back to the workplace and continues to misbehave. That's a waste of time and money.

For another, the criteria are often knowledge-based, not performance-based. We can make meaningful tests, either computer-administered (simulations) or real performance. What doesn't work are knowledge tests. And LMSs don't care what the form of assessment is, as long as it can be recorded.

What we should be looking for are competency assessments based upon real performance, not knowledge tests. Certainly, pilots have to perform appropriately, as do surgeons; they are measured by real performance. It's not about courses. If they can't perform, then there are knowledge resources, whatever might be helpful, but it's not like they have to take a course, unless they want to.

And the standards change over time as new procedures and tools come in. BTW, how does that adaptation happen? Not by one person decreeing it so, but by panels of experts coming up with new proposals, testing, and refinement. A social process, with criteria of its own about acceptable standards. And not measured by seat time, poundage, or anything other than the ability to reliably demonstrate capability.

Now I’m going to sound far-fetched here, but in the long term, I see communities developing the criteria and competencies collaboratively, and the assessment mechanisms as well.  The tools will exist for communities to pass up ideas, for experts to review and revise the criteria, and for the process to be transparent to governmental and public scrutiny.  We need better and more meaningful competency development and testing.  That’s what I’d like us all to comply with.

Think like a publisher

2 May 2011 by Clark Leave a Comment

Way back when we were building the adaptive learning system tabbed Intellectricity™, we were counting on a detailed content model that carved up the overall content into discrete elements that could be served up separately to create a unique learning experience. As I detailed in an article, the issues included granularity and tagging vocabulary. While my principle for the right level of granularity is that each element plays a distinct role in the learning experience, e.g. separating a concept presentation from an example from a practice element, my simpler heuristic is to consider "what would a knowledgeable mentor give to one learner versus another?" The goal, of course, is to support the future ability to personalize and customize the learning experience.
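
As an illustration only (the schema and tag vocabulary here are invented, not the actual Intellectricity model), that heuristic might translate into something like this: each element carries exactly one instructional role plus tags, so a delivery engine can pick what a mentor would hand a particular learner.

```python
# A rough sketch of the granularity principle: each content element plays one
# role (concept, example, practice) and carries tags, so a delivery engine can
# choose what a knowledgeable mentor would hand this particular learner.
from dataclasses import dataclass, field

ROLES = {"concept", "example", "practice"}

@dataclass
class ContentElement:
    element_id: str
    role: str                               # exactly one instructional role
    topic: str
    body: str
    tags: set = field(default_factory=set)  # e.g. audience level

    def __post_init__(self):
        assert self.role in ROLES, f"unknown role: {self.role}"

library = [
    ContentElement("c1", "concept",  "styles", "A style is a named bundle of formatting..."),
    ContentElement("e1", "example",  "styles", "Applying 'Heading 2' to a section title...", {"novice"}),
    ContentElement("p1", "practice", "styles", "Reformat this document using only styles.",  {"advanced"}),
]

def for_learner(topic, level):
    """What would a mentor hand this learner? Filter by topic and level tag."""
    return [el for el in library
            if el.topic == topic and (not el.tags or level in el.tags)]

print([el.element_id for el in for_learner("styles", "novice")])   # ['c1', 'e1']
```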

Performance Ecosystem

Back then, we were thinking of it as a content delivery engine, but our constraints required content produced in a particular format, and we were thinking about how we'd get content produced the way we needed. Today, I still think that producing content in discrete chunks, under a tight model, is a valuable investment of time and energy. Increasingly, I'm seeing publishers taking a similar view, and as new content formats get developed and delivered (e.g. ebooks, mobile web), more careful attention to content makes sense.

The benefits of more careful articulation of content can go further. In the performance ecosystem model (PDF), the greater integration step is specifically around more tightly integrating systems and processes.  While this includes coupling the disparate systems into a coherent workbench for individuals, it also includes developing content into a model that accounts for different input sources, output needs, and governance.  While this is largely for formal content, it could be community-generated content as well.  The important thing is to stop redundant content development.  Typically, marketing generates requirements, and engineering develops specifications, which then are fed separately to documentation, sales training, customer training, and support, which all generate content anew from the original materials.  Developing into and out of a content model reduces errors and redundancy, and increases flexibility and control.  (And this is not incommensurate with devolving responsibility to individuals.)

We're already seeing the ability to create custom recommendations (e.g. Amazon, Netflix), and companies are already creating custom portals (e.g. IBM). The ability to begin to customize content delivery will be important for customer service, performance support, and slow learning. Whether driven by rules or analytics (or hybrids), semantic tagging is going to be necessary, and that's a concomitant requirement of content models. But the upside potential is huge, and will eventually be a differentiator.
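
Purely as a sketch of the rule-driven end of that spectrum (the rules, tags, and channels below are made up for illustration, not any vendor's API), routing a semantically tagged chunk might look like this:

```python
# Toy rule-driven delivery over a semantically tagged chunk: the same content
# is routed differently depending on the delivery context and its tags.
chunk = {
    "id": "refund-policy-003",
    "tags": {"audience": "customer", "format": "short", "topic": "refunds"},
    "body": "Refunds are processed within 5 business days.",
}

# Each rule is (condition over context and tags, delivery decision).
rules = [
    (lambda ctx, t: ctx["channel"] == "support_chat" and t["format"] == "short",
     "inline answer in the chat window"),
    (lambda ctx, t: ctx["channel"] == "mobile" and t["audience"] == "customer",
     "brief push notification with a link"),
    (lambda ctx, t: True,   # fallback rule
     "link to the full knowledge-base article"),
]

def route(chunk, context):
    for condition, decision in rules:
        if condition(context, chunk["tags"]):
            return decision

print(route(chunk, {"channel": "support_chat"}))   # inline answer in the chat window
print(route(chunk, {"channel": "mobile"}))         # brief push notification with a link
```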

Learning functions in organizations need to be moving up the strategic ladder, taking responsibility not just for formal learning but also for performance support and ecommunity. Thinking like advanced publishers can and should be about moving beyond the text, and even beyond content, to the experience. While that could be custom designs (and in some cases it must be, e.g. simulation games), for content curators and providers it also has to be about flexible business models and quality development. I believe it's a must for other organizations as well. I encourage you to start thinking strategically about content development in richer ways: stop one-off development, and start putting some up-front effort not only into templates, but also into models with tight definitions and labels.

Org Development and Social Media

27 April 2011 by Clark 1 Comment

On principle (and for pragmatic reasons), I regularly think about how to define what I do, and to look for areas that are related.  As a consequence, I wonder if there’s another area I’m falling into, and more importantly, an interesting intersection that might warrant some exploration.

With my ITA colleagues, I've been looking at how to help organizations broaden the scope of the learning function to include informal and social learning, and leverage them to make organizations more successful. And, given that it's not about the technology, it ends up being a lot about how to create environments where social media can be used effectively. This led me to wonder what the proper category for that work is. Is it business information systems? That seems largely to be about databases. Is it industrial/organizational psychology? That largely seems too focused on the individual, and on psychometrics. That's when I looked into organizational development (OD; as our associate Jon Husband champions with his wirearchy work).

If you read the definition of OD, you see "effort to increase an organization's effectiveness and viability". That's largely what we're on about, too. As the Working Smarter Fieldbook says: "We foresee a convergence of the 'people disciplines' in organizations. As the pieces of companies become densely interconnected, the differences between knowledge management, training, collaborative learning, organization development, internal communication, and social networking fade away." However, some of these fields are reasonably technology-savvy, while others are more focused on effective people processes.

As I look through the suite of approaches that OD takes, it feels very familiar. The workshops, facilitation, and interventions used resonate very comfortably with what I've used and seen work. The goals are also very similar. However, I don't see a lot of awareness of, or interest in, technology. I'm wondering if I'm missing a huge swath of work in leveraging technology to facilitate organizational development, or whether there's a need and opportunity to start some cross-talk and look at the intersection for opportunities to leverage technology as a tool to increase an organization's effectiveness and viability. Kevin Wheeler, who's organizing the talent event I'm presenting at in Sydney and has a background in OD, opted for the latter. What do you think?

Mentoring Results

18 April 2011 by Clark Leave a Comment

Eileen Clegg from the Future of Talent Institute (and a colleague; we co-wrote the Extremophiles chapter for Creating a Learning Culture) pinged me the other day and asked about my thoughts on the intersection of:

  1. The new role of managers in the results-oriented work environment (ROWE)
  2. The  topic of  blending the Talent and Learning functions in the workplace.

She'd been excited about Cognitive Apprenticeship years ago after hearing me talk about it, and wondered if there was a role for it to play. I see it as two things: orgs need optimal execution just as the cost of entry, and that's where apprenticeship fits in, but they also need continual innovation. That needs collaboration, and we are still exploring that, though there are some really clear components. One of the nice things about cognitive apprenticeship is that it naturally incorporates collaborative learning, and can develop that as it develops understanding of the domain.

I admit I'm a little worried about ROWE from the point of view that Dan Pink picks out in Drive, about how a maniacal focus on results could lead to people doing anything necessary to achieve those results. It's got to be a little more about taking mutual ownership (producer and whoever is 'setting' the result) so that the result meets the org need in a holistic (even 'wise') way.

What has to kick in here is a shared belief in a vision/mission that you can get behind, individuals equipped to solve problems collaboratively (what I call big-L Learning: research, design, experimentation, etc.), and tools to hand for working together. You apprentice both in tasks *and* learning, basically, until you're an expert in your domain and are defining what's new in conjunction with your collaborators.

My colleague expressed a concern that there was a conflict between "(a) supporting someone's learning and (b) being invested in the success of their work product". And I would argue that management is NOT directly invested in the product, only in the producer: helping them be the best they can be and all that. If they're not producing good output, management either needs to develop the person or replace them, which indirectly affects the product. This isn't new for mentors either: they want their charges to do well, but the most they can do is influence the performer to the best of their ability.

As a component, learners need to develop their PKM/PLN (personal knowledge management, personal learning network). And 21st century skills aren’t taken for granted but identified and developed. In addition, the performance ecosystem, aka workscape – not only formal learning but also performance support, informal learning, and social learning – is the responsibility of the integrated talent/learning functions (which absolutely should be blended).  And ‘management’ may move more toward mentorship, or be a partner between someone strategizing across tasks and a talent development function in the organization.

As an extension to my ‘slow learning’ model, I think that the distinction between learning and performing from the point of view of support needs to go away. We can and should be concerned with the current performance and the long-term development of the learner at the same time.  Thus, the long term picture is of ongoing apprenticeship towards mutually negotiated and understood goals, both work and personal development.

Me, ‘to go’ and on the go

14 April 2011 by Clark Leave a Comment

Owing to a busy spring pushing the new book on mobile, I’ve been captured in a variety of ways. If you haven’t already seen too much of me talking mobile, here are some of the available options:

  • Cammy Bean did an audio interview of me for Kineo (cut into sensible size chunks)
  • Terrance Wing and Rick Zanotti hosted me for a #elearnchat video interview
  • I also have given a series of webinars on mobile for a variety of groups, here’s a sample.

Also, with the Internet Time Alliance, we gave a webinar on Working Smarter.

Coming up in the near future:

As I mentioned before, I’ll be in Sydney for the Australasian Talent Conference talking games and social learning, and workshopping mobile and elearning strategy.

In addition, however, I’ll also be running a deeper ID session and then a game design workshop on the same trip with Elnet on the 30th and 31st of May and an event at the University of Wollongong (more soon).

In June, I’ll be presenting at the DAU/GMU Innovations in eLearning conference that’s always been an intimate and quality event.

Also in June, I’ll be running my mobile design workshop and presenting on several different topics at the eLearning Guild’s exciting new mobile learning conference, mLearnCon.

And I’ll be participating virtually with a mobile  event with the Cascadia Chapter of ASTD also in June.

In August, I’m off to Madison Wisconsin to keynote the Annual Conference on Distance Teaching and Learning, as well as running a pre-conference workshop.

There’s more to come:

  • The CSTD annual conference in November in Toronto.
  • The Metro DC ASTD chapter in November as well.
  • Other things still on the bubble; stay tuned!

All of these events have great promise regardless of my participation, and I encourage you to check them out and see if they make sense to you. If you attend one, do introduce yourself (I’m not aloof, just initially shy).  Hope to catch up with you somewhere.

Learning Experience Design thru the Macroscope

7 April 2011 by Clark 11 Comments

Our learning experience design is focused, essentially, on achieving one particular learning objective.  At the level of curricular design, we are then looking at sequences of learning objectives that lead to aggregate competencies.  And these are delivered as punctate events.  But with mobile technologies, we have the capability to truly start to deliver what I call ‘slow learning’: delivering small bits of learning over time to really develop an individual.  It’s a more natural map to how we learn; the event model is pretty broken.  Most of our learning comes from outside the learning experience.  But can we do better?

Really, I don't think we have a handle on designing and delivering a learning experience that is spaced over time, and layered over our real-world activities, to develop individuals in micro bits over a macro period of time rather than macro bits over a micro period of time (which really doesn't work). We have pieces of the puzzle (smaller chunks, content models) and we have the tools (individualized delivery, semantics), but putting them together really hasn't been done yet.

Conceptually, it's not hard, I reckon. You have more, smaller chunks of content, and a more distributed performance model. You couple that with more self-evaluation, and you design a system that is patiently persistent in assisting people and supporting them along. You'd have to change your content design, and provide mechanisms to recognize external content and real performance contexts as learning experiences. You'd want to support lots of forms of equivalency, allowing self-evaluation against a rubric to co-exist with mentor evaluation.
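
As a very rough sketch of the 'patiently persistent' part (the intervals and rubric values below are assumptions for illustration, not a validated spacing model), the scheduling logic could be as simple as this:

```python
# A toy 'slow learning' scheduler: small chunks are drip-fed over time, and a
# learner's self-evaluation against a simple rubric stretches or shortens the
# gap before the next related chunk arrives.
from datetime import date, timedelta

def next_delivery(last_sent, self_rating):
    """self_rating: 1 (shaky) .. 4 (solid), e.g. from a self-evaluation rubric."""
    spacing_days = {1: 2, 2: 5, 3: 10, 4: 21}   # spacing grows with confidence
    return last_sent + timedelta(days=spacing_days[self_rating])

# A 'shaky' self-evaluation brings the next micro-chunk back in two days;
# a confident one pushes it out three weeks.
print(next_delivery(date(2011, 4, 7), self_rating=1))   # 2011-04-09
print(next_delivery(date(2011, 4, 7), self_rating=4))   # 2011-04-28
```

A real system would obviously blend mentor evaluation, content equivalencies, and performance context, but the core mechanism is small.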

There are some consequences, of course. You'd have to trust the learner, they'd have to understand the value proposition, and it's a changed model that all parties would have to accommodate. On the other hand, putting trust and value into a learning arrangement somehow feels important (and refreshingly different :). The upside potential is quite big, however: learning that sticks, learners that feel invested in, and better organizational outcomes. It's really trying to build a system that is more mentor-like than instructor-like. It's certainly a worthwhile investigation, and potentially a big opportunity.

The point is to recognize that technology is no longer the limit; our imaginations are. Then you can start thinking about what we would really want from a learning experience, and figure out how to deliver it. We still have to figure out what our design process would look like, what representations we would need to consider, and our associated technology models, but this is doable. The possibility is now well and truly on the table; anyone want to play? I'm ready to talk when you are.
