Learnlets


Clark Quinn’s Learnings about Learning

Diagramming Microlearning

21 March 2018 by Clark 5 Comments

I’ll be giving an upcoming webinar where I make my case for defining microlearning.  And, as part of my usual wrestling with clarity, I created a diagram. I thought I’d share it with you.

[diagram: Microlearning]

What, you want me to walk you through it? :)

Microlearning is a portmanteau (technical term:  mashup) of micro and learning. Thus, it implies small bits of learning.  Here I’m mapping it out in several ways.  I’ve previously argued that there are three main ways, but let’s map the first two out. It’s either

  • a series of objects contributing to a learning experience
  • a one-off object that creates learning

And there are problems with both. Too often, folks talk about breaking an existing course down into small chunks, and I suggest that won't work without some significant (!) effort. Just breaking it up means something seen earlier can be forgotten, so you need to worry about knowledge atrophy and plan for reactivation. And that's just spaced learning!

And I think it's unlikely that you can have a single object lead to any meaningful learning. However, such an object can serve as support to succeed in the moment. How-to videos, job aids, and the like can all be used to achieve an outcome. And that's just performance support!

What I think is the real untapped opportunity, the one that could (and I say should) capture the moniker, is contextualized learning: layering on a bit of learning, because of when and where you are, that develops you over time. It's potentially combining the two, so you help someone in the moment but add in the bit that also makes it a learning experience. There's much more to this, but that's the core idea.
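To make the combination concrete, here's a minimal sketch in Python. Everything in it is made up for illustration (the task name, the job-aid store, the nugget pool): serve the performance support the moment demands, then layer on one small learning bit the learner hasn't yet seen, tracked so later moments extend rather than repeat.

```python
from dataclasses import dataclass, field

# Hypothetical stores: one job aid per task, plus a pool of learning bits.
JOB_AIDS = {"replace_filter": "1. Power off  2. Open panel  3. Swap filter"}
NUGGETS = {"replace_filter": ["Why filters clog: dust load reduces airflow",
                              "How filter ratings trade airflow for capture"]}

@dataclass
class Learner:
    seen: set = field(default_factory=set)

def respond_to_moment(task: str, learner: Learner):
    """Serve the job aid (succeed now), plus one unseen bit (learn over time)."""
    support = JOB_AIDS.get(task)
    unseen = [n for n in NUGGETS.get(task, []) if n not in learner.seen]
    nugget = unseen[0] if unseen else None
    if nugget:
        learner.seen.add(nugget)  # track exposure so later visits extend, not repeat
    return support, nugget

print(respond_to_moment("replace_filter", Learner()))
```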

My main issue here is that people play fast and loose with the term microlearning, and I'd like to make sure people are either talking about spaced learning or performance support (both good), and not talking about just breaking up a course into chunks that are nice to consume but not engineered to lead to retention and transfer (not so good). Of course, better yet if we converge on contextualized learning!

Possible versus practical

28 February 2018 by Clark Leave a Comment

Last week, I gave a presentation to the local chapter of ATD. And I was surprised that their request was for mobile learning. Now that is something I can speak to, but given that my book on the topic came out seven years ago, it seemed like a dated topic. And I was wrong. The difference is between what's possible and what's practical.

Ok, so I am somewhat out ahead of the curve. My games book came out in 2005, but the market wasn't quite ready. I similarly think my L&D revolution book, in 2014, was ahead of the market, though closer (the topic is finally getting more traction, close to four years later). But I thought the mlearning book was timely (not least because my publisher asked for it more than it was my initiative ;).

However, the audience was eager. And it was relatively large for the group. And it took a comment from the organizer to raise my awareness. He said (and I paraphrase): “you think that it's old, but it's not old for everyone”. And that was indeed a wakeup call. Because while mobile to me is very practical, for many it's still just possible.

I  do tend to move on once I reckon I’ve figured something out. I’m interested when it’s still something to be understood or solved. Once I have my mind around it, my restless brain is on to something new.  That’s why I have this blog, for instance, to wrestle with new thoughts. If they get organized enough, it becomes a presentation or even a book.  (Though sometimes I do ones that are requested, e.g. my forthcoming one on myths, and I’m supposed to be reviewing the second round of proofs!)

But the interesting thing to me is to look beyond my own bubble (and what my colleagues are talking about).  We’re looking at what’s possible but not yet done, or what’s on the horizon. Yet I need to remember to continue to tout what’s now on the menu, and recognize not everyone’s yet started moving.  The things that I think are already practical to implement are still on the ‘possible’ list for others.

If you're reading this blog, you're probably with me, but feel free to let others know that I'm still happy to help with the things in my past! In any way: consulting or workshops or even speaking. For instance, I'll be talking engagement for the Guild at Learning Solutions, and in a webinar for AECT's Learner Engagement group. Just as I talk new things, like myths. What goes around comes around, I guess, and what's been possible is now practical. Ask me how!


#AECT17 Conference Contributions

16 November 2017 by Clark 1 Comment

So, at the recent AECT 2017 conference, I participated in three ways that are worth noting.  I had the honor of participating in two sessions based upon writings I’d contributed, and one based upon my own cogitations. I thought I’d share the thinking.

For my own presentation, I shared my efforts to move 'rapid elearning' forward. I put Van Merrienboer's 4 Component ID and Guy Wallace's Lean ISD as goals, but recognized the need for intermediate steps like Michael Allen's SAM, David Merrill's 'Pebble in a Pond', and Cathy Moore's Action Mapping. I suggested that even these might be too far a leap for some, who instead want steps that are slight improvements on their existing processes. These included three things: heuristics, tools, and collaboration. For each, I indicated specifics that could move from well-produced to well-designed.

In short, I suggested that while collaboration is good, many corporate situations want to minimize staff. Consequently, I suggested identifying those critical points where collaboration will be most useful. Then, I suggested shortcuts relative to the full process; for instance, when working with SMEs, focus on decisions to keep the discussion away from unnecessary knowledge. Finally, I suggested the use of tools to support the gaps our brain architectures create. Unfortunately, the audience was small (27 parallel sessions, and at the end of the conference), so there wasn't a lot of feedback. Still, I did have some good discussion with attendees.

Then, for one of the two participation sessions, the book I contributed to had solicited a wide variety of position papers from respected ed tech individuals, and then solicited responses to same. I had responded to a paper suggesting three trends in learning: a lifelong learning record system, a highly personalized learning environment, and expanded learner control of the time, place, and pace of instruction. To those three points I added two more: the integration of meta-learning skills, and the breakdown of the barrier between formal learning and lifelong learning. I believe both are going to be important: the former because of the decreasing half-life of knowledge, the latter because of the ubiquity of technology.

Because the original author wasn't present, I was paired for discussion with another author who shares my passion for engaging learning, and that was the topic of our discussion table. The format was fun; we were distributed in pairs around tables, and attendees chose where to sit. We had an eager group who were interested in games, and my colleague and I took turns answering questions and commenting on each other's remarks. It was a nice combination. We talked about the processes for design, selling the concept, and more.

For the other participation session, the book was a series of monographs on important topics. The discussion covered a subset of four topics: MOOCs, Social Media, Open Resources, and mLearning. I had written the mLearning chapter. The chapter format included 'take home' lessons, and the editor wanted our presentations to focus on these. I posited the basic mindshifts necessary to take advantage of mlearning. These included five basic principles:

  1. mlearning is not just mobile elearning; mlearning is a wide variety of things.
  2. the focus should be on augmenting us, whether our formal learning, or via performance support, social, etc.
  3. the Least Assistance Principle, in focusing on the core stuff given the limited interface.
  4. leverage context, take advantage of the sensors and situation to minimize content and maximize opportunity.
  5. recognize that mobile is a platform, not a tactic or an app; once you ‘go mobile’, folks will want more.

The sessions were fun, and the feedback was valuable.

Why AR

13 September 2017 by Clark Leave a Comment

Perhaps inspired by Apple’s focus on Augmented Reality (AR), I thought I’d take a stab at conveying the types of things that could be done to support both learning and performance. I took a sample of some of my photos and marked them up.  I’m sure there’s lots more that  could be done (there were some great games), but I’m focusing on simple information that I would like to see. It’s mocked up (so the arrows are hand drawn), so understand I’m talking concept here, not execution!

[photo: Magnolia]

Here, I'm starting small. This is a photo I took of a flower on a walk. This is the type of information I might want while viewing the flower through the screen (or glasses). The system could tell me it's technically a tree, not a bush (thanks to my flora-wise better half). It could also illustrate how large it is. Finally, the view could indicate that what I'm viewing is a magnolia (which I wouldn't have known), and show me, off to the right, the flower bud stage.

The point is that we can get information around the particular thing we're viewing. I might not actually care about the flower bud, so that might be filtered out, and it might instead talk about any medicinal uses. Also, it could be dynamic, animating the process of going from bud to flower and falling off. It could also talk about the types of animals (bees, hummingbirds, ?) that interact with it, and how. It would depend on what I want to learn. And, perhaps, it would include some incidental information on the periphery of my interests, for serendipity.
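Here's a toy sketch of that filtering, with invented annotations: surface what matches the viewer's declared interests, plus one item from the periphery for serendipity.

```python
# All labels and topics are made up for illustration.
ANNOTATIONS = [
    {"label": "It's a magnolia (a tree, not a bush)", "topic": "botany"},
    {"label": "Bud-to-flower stages, animated",       "topic": "botany"},
    {"label": "Medicinal uses of the bark",           "topic": "medicine"},
    {"label": "Pollinators: bees, hummingbirds",      "topic": "ecology"},
]

def annotate(view, interests, serendipity=1):
    """Filter annotations for the current view: core interests first,
    then a little from the periphery for serendipity."""
    core   = [a for a in view if a["topic"] in interests]
    fringe = [a for a in view if a["topic"] not in interests][:serendipity]
    return core + fringe

print(annotate(ANNOTATIONS, interests={"botany"}))
```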

[photo: Neighborhood view]

Going wider, here I'm looking out at a landscape, and the overlay is providing directions. Downtown is straight ahead, my house is over that ridge, and infamous Mt. Diablo is off to the left of the picture. It could do more: point out that the green ridges are grapes, or provide the name of the neighborhood that's in the foreground (I call it Stepford Downs, after the movie ;).

Dynamically, of course, if I moved the camera to the left, Mt. Diablo would get identified when it sprang into view. As we moved around, it could point to the neighboring towns in view, and in the direction of further towns blocked by mountain ranges. It could also identify the river flowing past to the north. And we could instead focus on other information: infrastructure (pipes and electricity), government boundaries, whatever's relevant could be filtered in or out.

[photo: Road]

And in this final example, taken from the car on a trip, AR might indicate some natural features. Here I've pointed to the clouds (and indicated the likelihood of rain). Similarly, I've identified the rock and the mechanism that shaped it. (These are all made up, and they could be wrong; Mt Faux definitely is!) We might even be able to touch a label and have it expand.

Similarly, as we moved, information would change as we viewed different areas. We might even animate what the area looked like hundreds of thousands of years ago and how it’s changed.  Or we could illustrate coming changes. It could instead show boundaries of counties or parks, types of animals, or other relevant information.

The point here is that annotating the world, a capability AR has, can be an amazing learning tool. If I can specify my interests, the system can capitalize on them to develop me over time. And this is as an adult. Think about doing this for kids, layering on information in their Zone of Proximal Development and interests! I know VR's cool, and has real learning potential, but there you have to create the context. Here we're taking advantage of it. That may be harder, but it's going to have some real upsides when it can be done ubiquitously.

Augmented Reality Lives!

20 July 2017 by Clark Leave a Comment

[image: Visually augmented reality]

Augmented Reality (AR) is on the upswing, and I think this is a good thing. I think AR makes sense, and it's nice to see both solid tool support and real use cases emerging. Here's the news, but first, a brief overview of why I like AR.

As I've noted before, our brains are powerful, but flawed. As with any architecture, any one choice ends up with tradeoffs, and we've traded off detail for pattern-matching. Technology is the opposite: it's hard to get technology to do pattern-matching, but it's really good at rote. Together, they're even more powerful. The goal is to most appropriately augment our intellect with technology, creating a symbiosis where the whole is greater than the sum of the parts.

Which is why I like AR: it's about annotating the world with information, augmenting it to our benefit. It's contextual, that is, doing things because of when and where we are. AR augments us sensorily, whether auditory, visual, or kinesthetic (e.g. vibration). Auditory and kinesthetic annotation is relatively easy; devices generate sounds or vibrations (think GPS: “turn left here”). Non-coordinated visual information, information that's not overlaid on the view, is presented as either graphics or text (think Yelp: maps and distances to nearby options). Tools already exist to do this, e.g. ARIS. However, arguably the most compelling and interesting form is aligned visuals.

Google Glass was a really interesting experiment, and it's back. The devices (glasses with a camera and a projector that can present information on the glass) were available, but didn't do much with where you were looking: there were generic heads-up displays and a camera, but little alignment between what was seen and what was consequently presented to the user as additional information. That's changed. Google Glass has a new Enterprise Edition, and it's being used to meet real needs and generate real outcomes. Glasses are supporting accurate placement in manufacturing situations requiring careful positioning. The necessary components and steps are highlighted on screen, reducing errors and speeding up outcomes.

And Apple has released its Augmented Reality software toolkit, ARKit, with features to make AR easy. One interesting aspect is built-in machine learning, which could make aligning with objects in the world easy! Incompatible platforms and standards impede progress, but with Google and Apple creating tools for each of their platforms, development can be accelerated. (I hope to find out more at the eLearning Guild's Realities 360 conference.)

While I think Virtual Reality (VR) has an important role to play for deep learning, I think contextual augmentation can be a great aid for extending learning (particularly personalization), as well as for performance support. That's why I'm excited about AR. My vision has been that we'll have a personal coaching system that will know where and when we are and what our goals are, and be able to facilitate our learning and success. Tools like these will make that easier than ever.

FocusOn Learning reflections

27 June 2017 by Clark Leave a Comment

If you follow this blog (and you should :), it was pretty obvious that I was at the FocusOn Learning conference in San Diego last week (the previous two posts were mindmaps of the keynotes). And it was fun as always. Here are some further reflections on what happened, as an exercise in meta-learning.

There were three themes to the conference: mobile, games, and video.  I’m pretty active in the first two (two books on the former, one on the latter), and the last is related to things I care and talk about.  The focus led to some interesting outcomes: some folks were very interested in just one of the topics, while others were looking a bit more broadly.  Whether that’s good or not depends on your perspective, I guess.

Mobile was present, happily, and continues to evolve.  People are still talking about courses on a phone, but more folks were talking about extending the learning.  Some of it was pretty dumb – just content or flash cards as learning augmentation – but there were interesting applications. Importantly, there was a growing awareness about performance support as a sensible approach.  It’s nice to see the field mature.

For games, there were positive and negative signs. The good news is that games are being more fully understood in terms of their role in learning, e.g. deep practice. The bad news is that there's still a lot of interest in gamification without a concomitant awareness of the important distinctions. Tarting up drill-and-kill with PBL (points, badges, and leaderboards; the new acronym, apparently) isn't worth significant interest! We know how to drill things that need to be drilled, but our focus should be on intrinsic interest.

As a side note, the demise of Flash has left us without a good game development environment. Flash was both a development environment and a delivery platform. As a development environment, Flash had a low learning threshold, and yet could be used to build complex games. As a delivery platform, however, it's woefully insecure (so much so that it's been proscribed in most browsers). The fact that Adobe couldn't be bothered to generate acceptable HTML5 out of the development environment, and let it languish, leaves the market open for another accessible tool. Unity and Unreal provide good support (as I understand it), but still require coding. So we're not at an easily accessible place. Oh, for HyperCard!

Much of the video interest was in technical issues (how to get quality, and/or get it on the cheap), but a lot of interest was also in interactive video. I think branching video is a really powerful learning environment for contextualized decision making, so the advent of tools that make it easier is to be lauded. An interesting session with the wise Joe Ganci (@elearningjoe) and a GoAnimate guy talked about when to use video versus animation, which largely seemed to reflect my view (confirmation bias ;) that it's about whether you want more context (video) or concept (animation). Of course, it was also about the cost of production and the need for fidelity (video more than animation in both cases).
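For a sense of why branching video supports contextualized decision making, here's a bare-bones sketch; the structure is my own invention, not any particular authoring tool's format. Each node is a clip plus the decisions it offers, and each choice routes to a consequence node.

```python
# Hypothetical branching structure: clip + choices; empty choices = terminal.
BRANCHES = {
    "intro":  {"clip": "complaint_intro.mp4",
               "choices": {"Acknowledge the frustration": "calm",
                           "Quote the refund policy": "policy"}},
    "calm":   {"clip": "deescalated.mp4", "choices": {}},  # good outcome
    "policy": {"clip": "escalated.mp4",   "choices": {}},  # consequences shown
}

def play(node_id: str = "intro"):
    """Walk one node: 'play' the clip, then show the decision points."""
    node = BRANCHES[node_id]
    print(f"[playing {node['clip']}]")
    for label, target in node["choices"].items():
        print(f"  choice: {label!r} -> node {target!r}")

play()
```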

There was a lot of interest in VR, which crossed over between video and games. Which is interesting, because it's not inherently tied to either! In short, it's a delivery technology. You can do branching scenarios, full game engine delivery, or just video in VR. The visuals can be generated as video or from digital models. There was some awareness of this, e.g. fun was made of the idea of presenting PowerPoint in VR (just like 2nd Life ;).

I did an ecosystem presentation that contextualized all three (video, games, mobile) in the bigger picture, and also drew upon their cognitive and then L&D roles. I also deconstructed the game Fluxx (a really fun game with an interesting ‘twist’). Overall, it was a good conference (and nice to be in San Diego, one of my ‘homes’).

Some new elearning companies ;)

23 May 2017 by Clark 1 Comment

As I continue to track what’s happening, I get the opportunity to review a wide number of products and services. While tracking them all would be a full-time job, occasionally some offer new ideas.  Here’s a collection of those that have piqued my interest of late:

Sisters eLearning: these folks are taking a kinder, gentler  approach to their products and marketing their services.  Their signature offering is  a suite of templates for your elearning featuring cooperative play.  Their approach in their custom development is quiet and classy. This  is reflected in the way they  promote themselves at conferences: they all wear mauve  polos  and sing beautiful  a capella.  Instead of giveaways, they  quietly provide free home-baked mini-muffins for all.

Yalms: these folks are offering  the ‘post-LMS’. It’s not an LMS, and  instead offers course management, hosting, and tracking.  It addresses compliance, and checks a whole suite of boxes such as media portals, social, and many non-LMS things including xAPI. Don’t confuse them with an LMS; they’re beyond that!

MicroBrain: this company has developed a system that makes it easy to take  your existing courses and chunk  them  up into little bits. Then it pushes them out on a  schedule.  It’s a serendipity model, where there’s a chance it just might be the right bit at the right time, which is certainly better than your existing elearning. Most  importantly, it’s mobile!

OffDevPeeps: these folks offer a full suite of technology development services including mobile, AR, VR, micro, macro, long, short, and anything else you want, all done at a competitive cost. If you are focused on the 'fast' and 'cheap' corners of the triangle, these are the folks to talk to. Coming soon to an inbox near you!

DanceDanceLearn: provides a completely unique offering. They have developed an authoring tool that makes it easy for you to animate dancers moving in precise formations that spell out content. They also have a synchronized swimming version.  Your content can be even more engaging!

There, I hope you’ll find these of interest, and consider checking them out.

Any relation between the companies portrayed and real entities is purely coincidental.  #couldntstopmyself #allinfun

Designing Microlearning

10 May 2017 by Clark 6 Comments

Yesterday, I clarified what I meant by microlearning. Earlier, I had written about designing microlearning, but what I was really talking about was the design of spaced learning. So how should you design the type of microlearning I really feel is valuable?

To set the stage, here we're talking about layering learning on performance in a context. However, it's more than just performance support. Performance support would be providing a set of steps (in whatever form: a series of static photos, video, etc.) or supporting those steps (checklist, lookup table, etc.). And again, this is a good thing, but microlearning, I contend, is more.

To make it learning, what you really need is to support developing an understanding of the rationale behind the steps, so learners can adapt the steps to different situations. Yes, you can do this in performance support as well, but here we're talking about models.

What (causal) models give us is a way to explain what has happened and predict what will happen. When we make these available around performing a task, we unpack the rationale. We want to provide the understanding behind the rote steps, to support adapting the process in different situations. We also provide a basis for regenerating missing steps.

Now, we can also provide examples, e.g. how the model plays out in different contexts. If what the learner is doing now can change under certain circumstances, elaborating how the model guides performing differently in different contexts provides the ability to transfer that understanding.

The design process, then, would be to identify the model guiding the performance (e.g., why we do things in this order), which might be an interplay between structural constraints (we have to remove this screw first because…) and causal ones (this is the chemical that catalyzes the process). We then need to determine how to represent these models.
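One possible representation, purely as a sketch (this is my own structure, not a standard schema): each step carries its action plus the structural or causal model behind it, so the 'why' can be surfaced in context.

```python
# Steps annotated with the kind of model (structural vs. causal) behind them.
# Task, steps, and models are all invented for illustration.
TASK = {
    "name": "descale the machine",
    "steps": [
        {"do": "remove the retaining screw",
         "why": {"kind": "structural",
                 "model": "the panel can't lift past the screw"}},
        {"do": "add the descaling solution",
         "why": {"kind": "causal",
                 "model": "the acid dissolves the calcium deposits"}},
    ],
}

def rationale(task: dict, step: int) -> str:
    """Surface the model behind a step, for learners who want to dig deeper."""
    s = task["steps"][step]
    return f"{s['do']}: {s['why']['model']} ({s['why']['kind']})"

print(rationale(TASK, 0))
```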

Once we've identified the task, and the associated models, we then need to make these available through the context. And here's why I'm excited about augmented reality: it's an obvious way to make the model visible. Quite simply, it can be layered on top of the task itself! Imagine that the workings behind what you're doing are available if you want them. That you can explore more as you wish, or not, and simply accept the magic ;).

The actual task is the practice; I'm suggesting that a model explaining why it's done this way is the minimum, and that examples covering a representative sample of other appropriate contexts provide support when it's a richer performance. Delivered, to be clear, in the context itself. This is what I think really constitutes microlearning. So what say you?

Clarifying Microlearning

9 May 2017 by Clark 5 Comments

I was honored to learn that a respected professor of educational technology liked my definition of microlearning, such that he presented it at a recent conference. He asked if I still agreed with it, and I looked back at what I'd written more recently. What I found was that I'd suggested some alternate interpretations, so I thought it worthwhile to be absolutely clear about it.

So, the definition he cited was:

Microlearning is a small, but complete, learning experience, layered on top of the task learners are engaged in, designed to help learners learn how to perform the task.

And I agree with this, with a caveat. In the article, I'd said that it could also be a small, complete learning experience, period. My clarification is that such experiences are unlikely; the definition he cited is the most likely, and likely the most valuable.

So, I’ve subsequently said  (and elaborated on the necessary steps):

What I really think microlearning could and should be is for spaced learning.

Here I'm succumbing to the hype, and trying to put a positive spin on microlearning. Spaced learning is a good thing; it's just not microlearning. And microlearning really isn't about helping someone perform the task in the moment (which is a good thing too), but about leveraging that moment to also extend their understanding.

No, I like the original definition, where we layer learning on top of a task, leveraging the context and requiring the minimal content to take a task and make it a learning opportunity. That, too, is a good thing. At least I think so. What do you think?

Microdesign

14 March 2017 by Clark 3 Comments

There’s been a lot of talk about microlearning of late – definitions, calls for clarity, value propositions, etc – and I have to say that I’m afraid some of it (not what I’ve linked to) is a wee bit facile. Or, at least, conceptually unclear.  And I think that’s a problem. This came up again in a recent conversation, and I had a further thought (which of course I have to blog about ;).  It’s about how to do microdesign, that is,  how  to  design micro learning. And it’s not trivial.

So one of the common views of microlearning is that it's 'just in time'. That is, if you need to know how to do something, you look it up. And that's just fine (as I've recently ranted). But it's not learning. (In short: it'll help you in the moment, but unless you design it to support learning, it's performance support instead.) You can call it Just In Time support, or microsupport, but properly, it's not microlearning.

The other notion is learning that's distributed over time. And that's good. But this takes a bit more thought. Think about it: if we want to systematically develop somebody over time, it's not just a steady stream of 'stuff'. Ideally, it's designed to optimally get there, minimizing the time taken on the part of the learner while yielding reliable improvements. And this is complex.

In principle, it should be a steady development that reactivates and extends learners' capabilities in systematic ways. So you still need your design steps, but you have to think about granularity, forgetting, reactivation, and development in a more fine-grained way. What's the minimum launch? Can you do aught but make sure there's an initial intro, concept, example, and first practice? Then, how much do we need to reactivate versus how much do we have to expand the capability in each iteration? How much is enough? As Will Thalheimer says in his spaced learning report, the amount and duration of spacing depend on the complexity of the task and the frequency with which it's performed.
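To make those dependencies concrete, here's a toy schedule along those lines: intervals expand with each reactivation, scaled by complexity and capped by how often the task is actually performed on the job. Every constant here is invented and would need tuning against real data.

```python
def spacing_schedule(complexity: float = 1.0,
                     performed_every_days: float = 30,
                     n: int = 4) -> list:
    """Day offsets for n reactivations of one chunk (all constants invented)."""
    day, gap = 0.0, 2.0 * complexity  # harder material: wider first gap (a guess)
    schedule = []
    for _ in range(n):
        day += gap
        schedule.append(round(day, 1))
        gap = min(gap * 2, performed_every_days)  # expand, but not past next real use
    return schedule

print(spacing_schedule(complexity=2, performed_every_days=30))
# -> [4.0, 12.0, 28.0, 58.0]: expanding intervals, capped by on-the-job frequency
```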

When do you provide more practice, versus another example, versus a different model? What's the appropriate gap in complexity? We'll likely have to make our best guesses and tune, but we have to think consciously about it. Just chunking up an existing course into smaller bits doesn't take into account the decay of memory over time and the gradual expansion of capability. We have to design an experience!

Microlearning is the right thing to do, given our cognitive architecture. Only so much 'strengthening' of the links can happen in any one day, so developing a full new capability will take time. And that means small bits over time make sense. But choosing the right bits, the right frequency, the right duration, and the right ramp-up in complexity, is non-trivial. So let's laud the movement, but not delude ourselves that either performance support or a stream of content is learning. Learning, that is, systematically changing the reliable behavior of the most complex thing in the known universe, is inherently complex. We should take it seriously, and we can.
