Learnlets

Clark Quinn’s Learnings about Learning

Another Day Another Myth-Ridden Hype Piece

9 October 2018 by Clark 1 Comment

Some days, it feels like I’m playing whack-a-mole. I got an email blast from an org (need to unsubscribe) that included a link that just reeked of being a myth-ridden piece of hype.  So I clicked, and sure enough!  And, as part of my commitment to showing my thinking, I’m taking it down. I reckon it’s important to take these myths apart, to show the type of thinking we should avoid if not actively attack.  Let me know if you don’t think this is helpful.

The article starts by talking about millennials. That’s a problem right away, as millennials is an arbitrary grouping by birthdate, and therefore is inherently discriminatory. The boundaries are blurry, and most of the differences can be attributed to age, not generation. And that’s a continuum, not a group. As the data shows.  Millennials is a myth.

Ok, so they go on to say: “Changing the approach from adapting to Millennials to leveraging Millennials is the key…”  Ouch!  Maybe it’s just me, but while I like to leverage assets, I think saying that about people seems a bit rude.  Look, people are people!  You work with them, develop them, etc. Leverage them?  That sounds like you’re using them (in the derogatory sense).

They go on to talk about Learning Organizations, which I’m obviously a fan of.  And so the ability to continue to learn is important.  No argument. But why would that be specific to ‘millennials’?  Er…

Here’s another winner: “They natively understand the imperative of change and their clockspeed is already set for the accelerated learning this requires.”  This smacks of the ‘digital native’ myth.  Young people’s wetware isn’t any different than anyone else’s. They may be more comfortable with the technology, but making assumptions such as this undermines the fact that any one individual may not fit the group mean. And it’s demonstrable that their information skills aren’t any better because of their age.

We move on to 3 ways to leverage millennials:

  1. Create Cross-pollination through greater teamwork.  Yeah, this is a good strategy.  FOR EVERYONE. Why attribute it just to millennials?  Making diverse teams is just good strategy, period. Including diversity by age? Sure. By generation?  Hype. You see this  also with the ‘use games for learning’ argument for millennials. No, they’re just better learning designs! (Ok, with the caveat: if done well.)
  2. Establish a Feedback-Driven Culture to Learn and Grow Together. That's a fabulous idea; we're finding that moving to a coaching culture with meaningful assignments and quick feedback (not quarterly or yearly) is valuable. We can correct course earlier, and people feel more engaged. Again, for everyone.
  3. Embrace a Trial-and-Error Approach to Learning to Drive Innovation. Ok, now here I think it’s going off the rails. I’m a fan of experimentation, but trial and error can be smart or random. Only one of those two makes sense. And, to be fair, they do argue for good experimentation in terms of rigor in capturing data and sharing lessons learned. It’s valuable, but again, why is this unique to millennials? It’s just a good practice for innovation.

They let us know there are 3 more ways they’ll share in their next post.  You can imagine my anticipation.  Hey, we can read  two  posts with myths, instead of just one.  Happy days!

Yes, do the right things (please), but  for the right reasons. You could be generous and suggest that they’re using millennials as a stealth tactic to sneak in messages about modern workplace learning.  I’m not, as they seem to suggest doing this largely with millennials. This sounds like hype written by a marketing person. And so, while I advocate the policies, I eschew the motivation, and therefore advise you to find better sources for your innovation practices. Let me know if this is helpful (or not ;).

Why Myths Matter

3 October 2018 by Clark 3 Comments

I’ve called out a number of myths (and superstitions, and misconceptions) in my latest tome, and I’m grateful people appear to be interested.  I take this as a sign that folks are beginning to really pay attention to things like good learning design. And that’s important. It’s also  important not to minimize the problems myths can create. I do that in my presentations, but I want to go a bit deeper.  We need to care about why myths matter to limit our mistakes!

It’s easy to think something like “they’re wrong, but surely they’re harmless”.  What can a few misguided intentions matter?  Can it hurt if people are helped to understand that people are different?  Won’t it draw attention to important things like caring for our learners?  Isn’t it good if people are more open-minded?

Would that this were true. However, let me spin it another way: does it matter if we invest in things that don’t have an impact?  Yes, for two reasons.  One, we’re wasting time and money. We will pay for workshops and spend time ensuring our designs have coverage for things that aren’t really worthwhile. And that’s both profligate and unprofessional.  Worse, we’re also not investing in things that might actually matter.  Like, say,  Serious eLearning. That is, research-derived principles about what  actually works. Which is what we should be getting dizzy about.

But there are worse consequences. For one, we could be undermining our own design efforts. Some of these myths may have us do things that undermine the effectiveness of our work. If we work too hard to accommodate non-existent ‘styles’, for instance, we might use media inappropriately. More problematic, we could be limiting our learners. Many of the myths want to categorize folks: styles, gender, left/right brain, age, etc.  And, it’s true, being aware of how diversity strengthens is important. But too often people go beyond; they’ll say “you’re an XYZ”, and people will self-categorize and consequently self-limit.  We could cause people not to tap into their own richness.

That’s still not the worst thing. One thing that most such instruments explicitly eschew is being used as a filter: hire/fire, or job role. And yet it’s being done. In many ways!  This means that you might be limiting your organization’s diversity. You might also be discriminatory in a totally unjustifiable way!

Myths are not just wasteful, they’re harmful. And that matters.  Please join me in campaigning for legitimate science in our profession. And let’s chase out the snake oil.  Please.

Where’s Clark? Fall 2018/Spring 2019 Events Schedule

2 October 2018 by Clark Leave a Comment

Here’re the events where I’ll be through the last quarter of this year, and into the next. Of course, you can always find out what’s up at the Quinnovation News page… But this is a more likely place for you to start unless you’re looking to talk to me about work.  I hope to see you, virtually or in person, at one of these!

The week of October 22-26, Clark will be speaking at DevLearn on measurement and eLearning science, and at AECT on meta-learning architecture. (Yeah, both in one week…long story.)

On Litmos’ Live Virtual Summit on 7-8 November, Clark will talk Learning Experience. Stay tuned!

Clark will be a guest on Relate’s eLearnChat on 15 Nov.

2019

On the 9th of January, Clark will present The Myths that Plague Us as a webinar for HRDQ-U.

Clark will be presenting in the Modern Workplace Learning track at the LearnTec conference in Karlsruhe, Germany that runs 29-31 January.

Feb 25-27, Clark will serve as host of the Strategy Track at Training Magazine’s annual conference, opening with an overview and closing with a strategy-development session.

Clark will speak to the Charlotte Chapter of ISPI on the Performance Ecosystem on March 14.

At the eLearning Guild’s Learning Solutions conference March 25-28, Clark will be presenting a Learning Experience Design workshop, where we’ll go deep on integrating learning science and engagement.

If you’re at one of these events, please do introduce yourself and say hello (I’m not aloof, I’m just shy; er, ok, at least ’til we get to know one another :).

ONE level of exaggeration

26 September 2018 by Clark 5 Comments

I’ve argued before that we should be thinking about exaggeration in our learning design. And I’ve noticed that it’s a dramatic trick in popular media. But you can easily think of ways it can go wrong. So what would be appropriate exaggeration?

When I look at movies and other story-telling media (comics), the exaggeration  usually is one level.  You know, it’s like real life but some aspect is taken beyond what’s typical. So, more extreme events happen: the whacky neighbor is  maniacal, or the money problems are  potentially fatal, or the unlikely events on a trip are just more extreme.  And this works; real life is mundane, but you go too far and it treads past the line of believability. So there’s a fine line there.

Now, when we’re actually performing, whether with customers or developing a solution, it matters. It’s our  job after all, and people are counting on us.  There’s plenty of stress, because there’s probably not enough time, and too much work, and…

However, in the learning situation, you’re just mimicking the real world. It’s hard to mimic the stress that comes from real life. So, I’m arguing, we should be bringing in the extra pressure through the story. Exaggerate!  You’re not just helping a customer, you’re helping the foreign ambassador’s daughter, and international relations are at stake!  Or the person you’re sweet on (or the father of said person) is watching!  This is the chance to have fun and be creative!

Now, you can’t exaggerate everything. You could add extraneous cognitive load in terms of processing if you make it too complex in the details. And you definitely don’t want to change the inherent decisions in the task and decrease the relevance of the learning. To me, it’s about increasing the meaning of the decisions, without affecting their nature. Which may require a bit of interpretation, but I think it’s manageable.

At core, I don’t think I’m exaggerating when I say exaggeration is one of your tools to enhance engagement  and effectiveness. The closer we bring the learning situation to the performance situation, the higher the transfer. And if we increase the meaningfulness of the learning context to match the performance context, even if the details are more dissimilar, I think it’s an effective tradeoff. What do  you think?

Wise technology?

25 September 2018 by Clark Leave a Comment

At a recent event, they were talking about AI (artificial intelligence) and DI (decision intelligence). And, of course, I didn’t know what the latter was, so it was of interest. The description mentioned visualizations, so I was prepared to ask about the limits, but the talk ended up being more about decisions (a topic I  am interested in) and values. Which was an intriguing twist. And this, not surprisingly, led me back to wisdom.

The initial discussion talked about using technology to assist decisions (c.f. AI), but I didn’t really comprehend the discussion around decision intelligence. A presentation on DA, decision analysis, however, piqued my interest. In it, a guy who’d done his PhD thesis on decision making talked about how, when you evaluate the outputs of decisions to determine whether the outcome was good, you need values.

Now this to me ties very closely back to the Sternberg model of wisdom. There, you evaluate both short- and long-term implications, not just for you and those close to you but more broadly, and with an  explicit  consideration of values.

A conversation after the event formally concluded cleared up the DI issue. It apparently is not training up one big machine learning network to make a decision, but instead having the disparate components of the decision modeled separately and linking them together conceptually. In short, DI is about knowing what makes a good decision and using it. That is, being very clear on the decision making framework to optimize the likelihood that the outcome is right.

And, of course, you analyze the decision afterward to evaluate the outcomes. You do the best you can with DI, and then determine whether it was right with DA. Ok, I can go with that.
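To make the distinction concrete, here's a minimal sketch (my own illustration, not anything from the talk; the component names and weights are invented) of the DI idea of modeling the pieces of a decision separately and linking them with an explicit framework, with a DA-style check on the outcome afterward:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionComponent:
    """One separately-modeled piece of a decision (the DI idea: model parts, link them)."""
    name: str
    estimate: Callable[[dict], float]  # maps a situation to a component score

def decide(components: List[DecisionComponent],
           weights: Dict[str, float],
           situation: dict) -> float:
    """Link the component models with an explicit framework (here, a weighted sum)."""
    return sum(weights[c.name] * c.estimate(situation) for c in components)

def analyze(predicted: float, actual: float) -> float:
    """DA-style step: after the fact, compare the predicted outcome with what happened."""
    return actual - predicted

# Hypothetical example: two toy components of a hiring decision
skill = DecisionComponent("skill", lambda s: s["skill_score"])
fit = DecisionComponent("fit", lambda s: s["fit_score"])
score = decide([skill, fit], {"skill": 0.6, "fit": 0.4},
               {"skill_score": 0.8, "fit_score": 0.5})
```

The weighted sum is just the simplest possible linking framework; the point is that the framework is explicit, so it can be inspected (and, per the post, could be extended with explicit value weightings).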

What intrigues me, of course, is how we might use technology here.  We can provide guidelines about good decisions, provide support through the process, etc. And, if we want to move from smart to  wise decisions, we bring in values explicitly, as well as long-term and broad impacts. (There was an interesting diagram where the short-term result was good but the long-term wasn’t; it was the ‘lobster claw’.)

What would be the outcome of wiser decisions?  I reckon in the long term, we’d do better for all of us. Transparency helps, seeing the values, but we’d like to see the rationale too. I’ll suggest we can, and should, be building in support for making wiser decisions. Does that sound wise to you?

Example Diagram

19 September 2018 by Clark Leave a Comment

No, not a diagram that’s an example, a diagram about examples!  I created this because I needed a diagram to represent examples. I’ve written about them, and I have diagrams for other components of learning like models. However, I wanted to capture some important points about examples. So here we go.

Example elements

The idea here is that an example should be a story, with narrative flow. You start with a problem, and flow through the process to the outcome.

One of the important elements along the way is showing the steps  and the  underlying thinking. Experts may be saying “you do this, then this” but what they’re not articulating is important too. It’s more like “I could’ve done this  or this, but because of this…” and that needs to be heard.

Even better if a mistake was made, caught, and remedied. Showing that, and how, you monitor performance as you go is important for learners to see. That’s not illustrated here, because it  is optional.

What is captured here is that there is (or should be) a conceptual model guiding your performance, and that should be explicitly referenced in the thinking. It should show how the model was instantiated because of the context, and how it led to the outcome.

These, I argue, are important points about examples that are reflected in the work of Schoenfeld as captured in Cognitive Apprenticeship (by Collins & Brown). Making thinking visible is an important component of learning whether classroom or workplace. So, have I shown  my thinking?

Post popularity?

18 September 2018 by Clark 1 Comment

My colleague, Will Thalheimer, asked what posts were most popular (if you blog, you can participate too).  For complicated reasons, I don’t have Google Analytics running.  However, I found I have a WordPress plugin called Page Views. It helpfully can list my posts by number of guest views.  I was surprised by the winner (and less so by the runner up). So it makes me wonder what leads to post popularity.

The winner was a post titled  New Curricula?  In it, I quote a message from a discussion that called for meta-cognitive and leadership skills, and briefly made the case to support the idea.  I certainly don’t think it was one of my most eloquent calls for this. Though, of course, I do believe in it.  So why?  I have to admit I’m inclined to believe that folks, searching on the term, came to this post, rather than that it was so important on its own merits.

Which isn’t the case with the post that had the second most views.  This one, titled  Stop creating, selling, and buying garbage!, was a rant about our industry. And this one, I believe, was popular because it could be viewed as controversial, or at least, a strong opinion.  I was trying to explain why we have so much bad elearning (c.f. the  Serious eLearning Manifesto), and talking about various stakeholders and their hand in perpetuating the sorry state of affairs.

Interestingly, I won an award last year for my post on AR (yes, I was on the committee, but we didn’t review our own).  And, I was somewhat flummoxed on that one too. Not that there weren’t good thoughts in it, but it was pretty simple in the mechanism: I (digitally) drew on some photos!  Yet clearly that made something concrete that folks had wondered about.

Of course, I think there’s also some luck or fate in it as well. Certainly, the posts I think are most interesting aren’t the ones others perceive.  But then, I’m biased. And perhaps some are used in a class so you get a number of people pointed to it or something. I really have no way to know.  I note that the posts here at Learnlets are more unformed thoughts, and my attempts at more definitive thoughts appear at the Litmos blog and now at my Quinnsights columns at Learning Solutions.

I’ll be interested in Will’s results (regardless of whether my data makes it in, because without analytics I couldn’t answer some of his questions).  And, of course, I welcome any thoughts you have about what makes a post popular (beyond SEO :), and/or what you’d  like to read!

Labels and roles

12 September 2018 by Clark Leave a Comment

I was just reflecting on the different job labels there are. Some of the labels are trendy, some are indicative, but there’s potentially a lot of overlap. I’m not sure what to do about it, but I thought I’d explore it.  So this is a bit of an unconstructed thought…

To start  somewhere, let’s start with Learning Architect. This is an interesting one (and one I just chose on a project). It leverages the metaphor of the relationship between an architect and the contractor who builds it. The architect imagines the flow of people, places of rest, and creatively evaluates how to match the requirements with the available space (and budget). Then someone else builds it. This is similar to a learning designer, who envisions a learning experience via a storyboard (mapping to a blueprint), before handing off to a developer.

So what is a learning experience designer?  Here is someone envisioning the cognitive (and aesthetic) flow the learner will go through.  It’s looking at addressing the change in knowledge and emotions, as a user experience designer might for an interface.  If they also build it, they’re a learning experience developer instead, or in addition.

Right now I see both as equivalent. An architect is developing the flow of people and their emotions in the space. Where do you want them active, and where do you want them reflective?  The learning experience designer similarly. Are they just different cuts on the same role? I note that in the 70:20:10 process of Arets, Jennings, and Heijnen, learning architect is a role that sits between doing the analysis and implementing the solution.

I also have heard of a learning  strategist.  This could be the same, coming up with a series of tactics to transform the learner into someone with new capabilities.  Or this could be a meta-level, a role I frequently play, reviewing the design process for changes that can maximize the outcomes with a minimum of disruption.

Then there’s learning  engineering, which is in the process of being defined by a committee.  It not only includes the learning science of design, but the technical implementation. Certainly architects and designers need to be aware of the tech, not stipulating the impossible, but this role goes deeper, on to systems integration and more.

Of course, we have the traditional instructional designer, which captures the notion of facilitated learning, but not the integration of the aesthetic component.  And, on the whole, I’m avoiding the ‘developer’ label, for the people who take a storyboard to a realized experience.  There are clearly people who have to straddle both (I recently asked an audience how many were sole practitioners in this sense, and a majority seemed to have to design and develop, presumably in the tools that support that).

All these labels may reflect how an organization is dividing up the whole process. I’m not even certain that the way I’ve characterized them is accurate.  What labels am I missing? What nuances?  Does this make sense?

Revisiting personal learning

11 September 2018 by Clark 2 Comments

A number of years ago, I took a stab at an innovation process. I was reminded of it while thinking about personal learning, and looked at it again. And it doesn’t seem to have aged well. So I thought I’d revisit the model, and see what emerged. So here’s a mindmap of personal learning, and the associated thinking.

The earlier 5R model was based on Harold Jarche’s Seek-Sense-Share model, a deep model that has many rich aspects. I had reservations about the labels, and I think it’s sparse at either end.  (And, I worked too hard to try to keep it to ‘R’s, and  Reify just doesn’t work for me. ;)

Personal learning

In this new approach, I have a richer representation at either end. My notion of ‘seek’ (yes, I’m still using Harold’s framework, more at the end) has three different aspects. First is ‘info flows’. This is setting up the streams you will monitor. They’re filters on the overwhelming overload of info available. They’re your antenna for resonating with interesting new bits. You can also search for information, using DuckDuckGo or Google, or going straight to Wikipedia or other appropriate information resources you know. And, of course, you can ask, using your network, or Quora, or any social media platform like LinkedIn or Facebook.  And there are different details in each.

To make sense of the information, you can do either or both of representing your understanding and experimenting. Representing is a valuable way to process what you’re hearing, to make it concrete. Experimenting is putting it to the test. And you naturally do both; for instance, you read a web page telling you how to do something new, then put it into practice and see if it works. Both require reflection, but getting concrete in trying it out or rendering it is valuable. Again, representing and experimenting break down into further details.

What you learn can (and often should) be shared. At whatever stage you’re at, there’s probably someone who would benefit from what you’ve learned.  You can post it publicly (like this blog), or circulate it to a well-selected set of individuals (and that can range from one other person to a small group or some channel that’s limited).  Or you can merely have it in readiness so that if someone asks, you can point them to your thoughts. Which is different than pointing them to some other resource, which is useful, but not necessarily learning. The point is to have others providing feedback on where you’re at.

I looked at Harold’s model more deeply after I did this exercise (a meta-learning exercise on its own; take your own stab and then see what others have done).  I realize mine is done on sort of a first-principles basis from a cognitive perspective, while his is richer, being grounded in others’ frameworks. Harold’s is also more tested, having been used extensively in his well-regarded workshop.

I note that part of the meta-learning here is the ongoing monitoring of your own processes (the starred grey clouds). This is a key part of Harold’s workshop, by the way. Looking at your processes and evaluating them. An early exercise where you evaluate your own network systematically, for instance, struck me as really insightful. I’m grateful he was willing to share his materials with me.

So, this has been my sensing and sharing, so I hope you’ll take the opportunity to provide feedback!  What am I missing?


Translational research?

6 September 2018 by Clark 2 Comments

I came across the phrase “Translational Behavior-Analysis”. I had no idea what that was, so I looked it up.  And I found the answer interesting. The premise is that this is an intermediary between academic and applied work.  Which I think of as a good thing, but is it really a  thing? Does it make sense?  I have mixed feelings, so I thought I’d lay them out.

So, one of the things that a few people do is translate research to practice. I’m thinking of folks who are quite explicit about it like Will Thalheimer, Patti Schank, Julie Dirksen, Ruth Clark, and Mirjam Neelen, amongst others.  They’re also practitioners, designing or consulting on solutions, but they can read research in untranslated academese and make sense of it. So is this that?

One definition I found said: “the process of applying ideas, insights, and discoveries generated through basic scientific inquiry to” <applied discipline>.  This is big in medical circles, apparently.  And that’s a good thing, I hope you’d agree.  However, they also say “occupies a conceptual space between basic research and applied research”.  Wait, I thought that  was applied research!

Ok, so further research found this gem: “Applied research is any research that may possibly be useful for enhancing health or well-being. It does not necessarily have to have any effort connected with it to take the research to a practical level.”  Ah, so we can do things in applied research that we think might be good, even if it isn’t connected to basic research. Well, then.  When I think of applied cognition, which has showed up in interface design (and I try to push in learning experience design), I think of that as doing what they call translational, but perhaps it’s not that way in other fields.

Ultimately, this was about fast-tracking medical research into changing people’s lives. And that’s a good thing. And I think our ‘interpreters’ are indeed serving to help take academic research and fast-track it into our learning designs. Will has called himself a ‘translator’ and that’s a good thing.

We also need a way for our own innovations, for instance taking agile software development and applying it to learning design, to filter back to academia and get perhaps a rigorous test. There are people experimenting with VR and other technologies, for instance, and some of the experimentation is “why not this” instead of “theory suggests that”. And both are good.  We may need translators both ways, and I think the channel back to academia is a bit weak, at least in learning and technology. Happy to be wrong about that, by the way!

I’m mindful that we have to be careful about bandwagons. There’s a lot of smoke and hype that makes it easy to distract from fundamentals that we’re still not getting right.  And I’m not sure whether applied or translational is the right label, but it is an important  role.  I guess I still think that a tight coupling between basic and applied implies translational (I like Reeves’ Design Research as a bridge,  myself), but I’m happy to accept more nuanced views.  How about you?
