Learnlets

Clark Quinn’s Learnings about Learning

Search Results for: engag

Monday Broken ID Series: Perfect Practice

1 March 2009 by Clark 1 Comment

This is one in a series of thoughts, posted on Mondays, about broken areas of ID. I intend to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say ‘bad designer’, but to point out how to do good design.

Really, the key to learning is practice. Learners have to apply knowledge, in the form of skills, to really internalize and ‘own’ the learning. Knowledge recitation, in the absence of application, leads to what cognitive science calls ‘inert knowledge’: knowledge that can be recited back, but isn’t activated in appropriate contexts.

What we see, unfortunately, is too much knowledge testing, and not meaningful application. We see meaningless questions checking whether people can recite memorized facts, with no application of those facts to solve problems. We see alternatives to the right answer that are so obviously wrong that we can pass the test without learning anything! And we see feedback that isn’t specific to the deficit. In short, we waste our time and the learner’s.
What we want is appropriate challenge, contextualized performance, meaningful tasks, appropriate feedback, and more.

First, we should have picked meaningful objectives that indicate what learners can do, in what context, to what level, and now we design the practice to determine whether they can do it. Of course, we may need some intermediate tasks to develop their skills at an appropriate pace, providing scaffolding to simplify the task until it’s mastered.

We can scaffold in a variety of ways. We can provide tasks with simplified data first, uncomplicated by other factors. We can provide problems with parts already worked, so learners can accomplish the component skills separately and then combine them. We can provide support tools such as checklists or flowcharts, and gradually remove them until the learner is capable.

We do need to balance the level of challenge, so that the task gets difficult at the right rate for the learner: too easy, and the learner is bored; too hard, and the learner is frustrated. Don’t make it too easy! If it matters, ensure they know it (and if it doesn’t, why are you bothering?).

The trick is not only the inherent nature of the task, but often the alternatives to the right answer. Learners don’t (generally) make random mistakes; they make patterned mistakes that represent inappropriate models they perceive as appropriate. We should choose alternatives to the right answer that represent these misconceptions.

Consequently, we need to provide specific feedback for each particular misconception. That’s why any quiz tool that has only one response for all the wrong answers should be tossed out; it’s worthless.
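To make that concrete, here’s a minimal sketch (hypothetical names and content; a Python illustration, not any particular quiz tool) of a multiple-choice item where each distractor represents a distinct misconception and carries its own feedback:

```python
from dataclasses import dataclass


@dataclass
class Option:
    text: str
    feedback: str        # specific to the misconception this option represents
    correct: bool = False


@dataclass
class QuizItem:
    stem: str
    options: list[Option]

    def respond(self, choice: int) -> tuple[bool, str]:
        """Return whether the choice was right, plus feedback targeted at it."""
        opt = self.options[choice]
        return opt.correct, opt.feedback


# Each wrong answer maps to a distinct misconception, with its own feedback.
item = QuizItem(
    stem="A learner answers quickly but can't apply the idea later. Why?",
    options=[
        Option("The content was too hard",
               "Difficulty alone doesn't explain recitation without application."),
        Option("The knowledge is inert: recited, never applied",
               "Right: without application, knowledge isn't activated in context.",
               correct=True),
        Option("The learner didn't memorize enough",
               "More memorization still doesn't yield usable, activated skill."),
    ],
)
```

The point of the structure: the feedback lives on the option, not the item, so a single generic ‘wrong, try again’ response is impossible by construction.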

We need to ensure that the setting for the task is of interest to the learner. The contexts we choose should set up problems the learner viscerally understands are important, and that they’re interested in.
And as mentioned with examples, the contexts seen across both examples and practice determine the space of transfer, so that still needs to be kept in mind.

The elements listed here are the elements that make effective practice, but also those that make engaging experiences (hence, the book). That is, games. While the best practice is individually mentored real performance, that doesn’t scale well, and the consequences can be costly. The next best practice, I argue, is simulated performance, tuned into a game (not turned, tuned). While model-driven simulations are ideal for a variety of reasons (essentially infinite replay, novelty, adaptive challenge), they can be simplified to branching or linear scenarios. If nothing else, just write better multiple choice questions!
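Those branching scenarios are structurally simple; a minimal sketch (hypothetical content and names, just to show the shape) is a set of nodes, each holding situation text and choices that point at follow-on nodes:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    situation: str
    # choice text -> id of the node that choice leads to
    choices: dict[str, str] = field(default_factory=dict)


# A tiny branching scenario (invented content): each decision leads to a
# new situation carrying its consequences; leaf nodes end the scenario.
SCENARIO = {
    "start": Node("A client rejects your course design.",
                  {"Defend the design": "pushback",
                   "Ask what's driving the concern": "probe"}),
    "pushback": Node("The client disengages; trust erodes.", {}),
    "probe": Node("You uncover the real performance gap.", {}),
}


def step(node_id: str, choice: str) -> str:
    """Follow a choice from one node to the next node's id."""
    return SCENARIO[node_id].choices[choice]
```

A linear scenario is just the degenerate case where every node has one choice; a model-driven simulation replaces the static dict with state computed from an underlying model.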

Note that, here, practice encompasses both formative and summative assessment. In either case the learner’s performing; it’s just whether you evaluate and record that performance to determine what the learner is capable of. I reckon assessment should always be formative, helping the learner understand what they know. And summative assessment, in my mind, has to be tied back to the learning objectives, seeing if they can now do what they need to be able to do; that’s the difference.

If you create meaningful, challenging, contextualized performance, you create effective practice. And that’s key to behavior change, and learning. So practice making perfect practice, because practice makes perfect.

Designing Learning

28 February 2009 by Clark 2 Comments

Another way to think about what I was talking about yesterday in revisiting the training department is taking a broader view.   I was thinking about it as Learning Design, a view that incorporates instructional design, information design and experience design.

I’m leery of the term instructional design, as that label has been tarnished by too many cookie-cutter examples and rote approaches for me to feel comfortable with it (see my Broken ID series). However, real instructional design theory (particularly when it’s cognitive-, social-, and constructivist-aware) is great stuff (e.g. Merrill, Reigeluth, Keller, et al); it’s just that most of it’s been neutered in interpretation. The point being, really understanding how people learn is critical. And that includes Cross’ informal learning. We need to go beyond just the formal courses, and provide ways for people to self-help, and group-help.

However, it’s not enough. There’s also understanding information design. Now, instructional designers who really know what they’re doing will say, yes, we take a step back and look at the larger picture, and sometimes it’s job aids, not courses. But I mean more here. I’m talking about, when you do sites, job aids, or more, including the information architecture, information mapping, visual design, and more, to really communicate and support navigation. I see reasonable instructional design undone by bad interface design (and, of course, vice versa).

Now, how much would you pay for that? But wait, there’s more! A third component is the experience design. That is, viewing it not from a skill-transferral perspective, but from the emotional view. Is the learner engaged, motivated, challenged, and left fulfilled? I reckon that’s largely ignored, yet myriad evidence points us to the realization that the emotional connection matters.

We want to integrate the above. Putting a different spin on it, it’s about the intersection of the cognitive, affective, conative, and social components of facilitating organizational performance. We want to do the least we can to achieve that, and we want to support working alone and together.

There’s both a top-down and a bottom-up component to this. At the bottom, we’re analyzing how to meet learner needs, whether that’s fully wrapped with motivation, or just the necessary information, or providing the opportunity to work with others to answer the question. It’s about infusing our design approaches with a richer picture, respecting our learners’ time, interests, and needs.

At the top, however, it‘s looking at an organizational structure that supports people and leverages technology to optimize the ability of the individuals and groups to execute against the vision and mission.   From this perspective, it‘s about learning/performance, technology, and business.

And it’s likely not something you can, or should, do on your own. It’s too hard to be objective when you’re in the middle of it, and the breadth of knowledge to be brought to bear is far-reaching. As I said yesterday, what I reckon is needed is a major revisit of the organizational approach to learning. With partners we’ve been seeing it, and doing it, but we reckon there’s more that needs to be done. Are you ready to step up to the plate and redesign your learning?

Revisiting the Training Department

27 February 2009 by Clark 1 Comment

Harold Jarche and Jay Cross have been talking about rethinking the training department, and I have to agree. In principle, if there is a ‘training’ department, it needs to be coupled with a ‘performance’ department and a ‘social learning’ department, all under an organizational learning & performance umbrella.

What’s wrong with a training department? Several things you’ll probably recognize: all problems have one answer (‘a course’); no relationship to the groups providing the myriad portals; no relationship to anyone doing any sort of social learning; no ‘big picture’ comprehension of the organization’s needs; and typically the courses aren’t that great either!

To put it another way, it’s not working for the organizational constituencies. The novices aren’t being served, because the courses are too focused on knowledge rather than skills, aren’t sufficiently motivating to engage them, and are used even when job aids would do. The practitioners aren’t getting, or able to find, the information they need, and have trouble getting access to expert knowledge. And experts aren’t able to collaborate with each other, or to work effectively with practitioners to solve problems. Epic fail, as they say. OK, so that’s a ‘straw man’, but I’ll suggest it’s all too frequent.

The goal is a team serving the entire learnscape: looking at it holistically, matching needs to tools, nurturing communities, leveraging content overlap, and creating a performance-focused ecosystem. I’ve argued before that such an approach is really the only sustainable way to support an organization. However, that’s typically not what we see.

Instead, we tend to see different training groups making courses in their silos, with no links between their content (despite the natural relationships), often no link to content in portals, no systematic support for collaboration, and overall no focus on long-term development of individuals and capabilities.

So, how do we get there from here? There’s no easy answer, because (and this isn’t just consultant-speak) it depends on where the particular organization is, what makes sense as its particular end state, and what metrics are meaningful to the organization. There are systematic ways to assess an organization (Jay, Harold, and I have drafted just such an instrument), and processes to follow to come up with recommendations for what you do tomorrow, next month, and next year.

The outcome should be a plan, a strategy, for moving toward the goal. The path differs, as the starting points are organization-specific. One way is DIY, if you’ve got the time; it’s cheaper, but more error-prone. The fast track is to bring in assistance and take advantage of a high-value, lightweight infusion of the best thinking to set the course. No points for guessing my recommendation. But with the economic crisis and organizational ‘efficiencies’, can you afford to stick to the old, ineffective path?

Monday Broken ID Series: Examples

22 February 2009 by Clark 2 Comments


This is one in a series of thoughts, posted on Mondays, about broken areas of ID. I intend to provide insight into the many ways much of instructional design fails, and some pointers for avoiding the problems. The point is not to say ‘bad designer’, but to point out how to do good design.

I see several recurring problems with examples, and they aren’t even the deepest problems. Examples tend to be mixed in with the concept, instead of separate, if they exist at all. And when they do exist, too often they’re cookie-cutter examples that don’t delve into the elements that make examples successful, let alone are intrinsically interesting. Yet we know what these elements are!

Conceptually, examples are applications of the concept in a context. That is, we have a problem in a particular setting, and we want to use the model as a guide to solving the problem. Note that the choice of examples is important. The broader the transfer space (that is, the more general the skills), the more you want examples that differ in many respects. Learners generalize the concept from the examples, and the extent to which they’ll generalize to all appropriate situations depends on the breadth of contexts they’ve seen (across both examples and practice). You need to ensure that the contexts the learner sees are as broadly disparate as possible.

Note that we should also be choosing problems and contexts that are of interest to the audience.   Going beyond just the cognitive role, we should be trying to tap into the motivational and engagement factors.   Factor that into the example design as well!

Now, we know that examples have to show the steps that were taken. They have to have specific steps from beginning to end. And, I add, those steps have to refer back to the concept that guides the presentation. You can’t just say “first you do this, then you do that”; you have to say “first, using the model, you do this, and then the model says to do that”. You need to show the steps, and the intermediate work products. Annotating them is really important.

And that annotation covers not just the steps, but also the underlying thought processes. The problem is, experts don’t even have access to their own thought processes anymore! Yet their thinking really works along lines like “well, I could’ve done A, but because of X thought B was a better approach, and then I could do C, but because of Y I tried D”, etc. The point being, there are a lot of contextual clues they evaluate that aren’t even conscious, yet these clues are really important for learners. (BTW, this is one of the many reasons I recommend comics in elearning; thought bubbles are great for cognitive annotation.)

Another valuable component is showing mistakes and backtracking. This is a hard one to get your mind around, and yet it’s powerful both cognitively and emotionally. Typically, experts model the behavior perfectly, and when learners try, they make mistakes and may turn off emotionally (“I’m having trouble, and it looks so easy; I must not be good at this”). In reality, experts make mistakes all the time, and learners need to know that. It keeps you from losing them altogether!

Cognitively it’s valuable, too. When experts show backtracking and repair, they’re modeling the meta-skills that are part of the expertise. Unpacking that self-monitoring helps learners internalize the ‘check your answer’ component that’s part of expert performance. This takes more work on the part of the designer, as we had with the concept, but if the content is important (otherwise, why are you building a course?), it’s worth doing right.

Finally, I believe it‘s important to convey the example as a story.   Our brains are wired to comprehend stories, and a good narrative has better uptake.   Having a protagonist documenting the context and problem, and then solving it with the model to achieve meaningful outcomes, is more interesting, and consequently more memorable.   We can use a variety of media to tell stories, from prose, through audio (think mobile and podcasts) and narrated slideshow, animation, or video.   Comics are another channel.   Stories also are useful for conveying the underlying thought processes, via thought bubbles or reflective narration (“What was I thinking?…”).

So, please do good examples.   Be exemplary!

Strategy, strategically

21 February 2009 by Clark Leave a Comment

In addition to working on the technology plan for my school district, I’ve also been assisting a not-for-profit trying to get strategic about technology.   The struggles are instructive, but looking across these two separate instances as well as the previous organizations I’ve assisted, I’m realizing that there are some common barriers.

The obvious one is time. The old saying about alligators and draining the swamp is all too true, and it’s only getting worse. Despite an economic stimulus package for the US and other countries, and (finally) a budget in my own state, things are not likely to get better soon. Even if companies could hire back everyone they’ve laid off, the transition time would be significant. It’s hard to sit back and reflect when you’re tackling more work with fewer resources. Yet, we must.

The second part is more problematic. Strategic thinking isn’t easy or obvious, at least to everyone. For some it’s probably in their nature, but I reckon for most it takes a breadth of experience, and an ability to abstract from that experience, to take a broader perspective. Abstraction, I know from my PhD research on analogy, isn’t done well without support. Aligning that perspective with organizational goals simultaneously adds to the task. Do it while keeping both short- and long-term value in view, for several different layers of stakeholders, and you’re talking serious cognitive overhead.

We do need to take the time to be strategic. As I was just explaining on a call, you don’t want to be taking small steps that aren’t working together towards a longer-term goal. If you’re investing in X, and Y, and Z, and they don’t build on each other, you’re missing an opportunity. If you have alternatives A and B, and A seems more expedient, but you haven’t looked to the future, you might miss that B is the better long-term investment. If you don’t evaluate what else is going on, and leverage those initiatives, because you’re just meeting your immediate needs, you’re not making the best investment for the organization, and you’re putting yourself at risk. You need to find a way to address the strategic position, at least for a percentage of your time (and that percentage goes up with your level in the organization).

To cope, we use frameworks and tools to help reduce the load, and follow processes to support systematicity and thoroughness. The performance ecosystem framework is one specific to use of technology to improve organizational learning, innovation, and problem-solving, but there are others.   Sometimes we bring in outside expertise to help, as we may be too tightly bound to the context and an external perspective can be more objective.

You can totally outsource it, to a big consulting company, but I reckon the principle of ‘least assistance’ holds here too. You want to bring in top thinking in a lightweight way, rather than ending up with a bunch of interns tied to you at the wrist and ankles. What can you do that will provide just the amount of help you need to make progress? I have found that a lightweight approach can work in engagements with clients, so I know it can be done. Whether you do it yourself, with partners, or bring in outside help, don’t abandon the forest for the trees; do take the time. You need to be strategic, so be strategic about it!

Less than words

22 January 2009 by Clark 8 Comments

Yesterday, while I was posting on how words could be transcended by presentation, there was an ongoing twitfest on terms that have become overused and, consequently, meaningless.   It started when Jane Bozarth asked what ‘instructionally sound’ meant, then Cammy Bean chimed in with ‘rich’, Steve Sorden added ‘robust’, and it went downhill from there.

I responded to Jane’s initial query that instructionally sound cynically meant following the ID cookie cutter, but ideally meant following what’s known about how people learn.   I similarly tried to distinguish the hyped version of engaging (gratuitous media use) from a more principled one (challenging, contextualized, meaningful, etc).   (I had to do the latter, given I’ve got the word engaging in my book title.)

Other overused terms mentioned include: adaptive, brain-based, game-like, comprehensive, interactive, compelling, & robust. Yet behind most of these are important concepts (OK, game-like is hype, and Daniel Willingham’s put a bucket of cold water on brain-based). I should’ve added ‘personalized’ after a demo of an elearning authoring suite I sat through yesterday touted that it could capture the learner’s name and use it to print a ‘personalized’ certificate at the end.

And that’s the problem: important concepts are co-opted for marketing by using the most trivially qualifying meaning of the term to justify touting it as an instance.   Similarly, clicking to move on is, apparently, interactive.   Ahem.   It’s like the marketers don’t want to give us any credit for having a brain. (Though, sadly, from what I see, there does seem to be some lack of awareness of the deeper principles behind learning.)   I invoke the Cluetrain, and ask elearning vendors to get on board.

So, before you listen to the next pitch from a vendor, get your Official eLearning Buzzword Bingo™ card, make sure you know what the terms mean, and challenge them to ensure that they a) really understand the concept, and b) really have the capability. You win when you catch them out; a smarter market is a better market. OK, let’s play!

Predictions for 2009

30 December 2008 by Clark 13 Comments

Over at eLearn Magazine, Lisa Neal Gualtieri gathers elearning predictions for 2009, and they’re reliably interesting. Here are mine:

The ordinary: Mobile will emerge, not as a major upheaval, but quietly infiltrating our learning experiences. We’ll see more use of games (er, Immersive Learning Simulations) as a powerful learning opportunity, and tools to make them easier to develop. Social networking will become the ‘go-to’ option to drive performance improvements.

The extraordinary: Semantics will arise; we’ll start realizing the power of consistent tagging, and start being able to meta-process content to do smart things on our behalf. And we’ll start seeing cloud hosting as a new vehicle for learning services.

I’ve been over-optimistic in the past, for example continuing to believe mobile will make its appearance (and it is, but not in the big leap I hoped). It’s quietly appearing, but interest isn’t matching the potential I’ve described in various places. I’m not sure if that’s due to a lack of awareness of the potential, or to perceptions of the barriers: too many platforms, insufficient tools.

I continue to see interest in games, and naturally I’m excited.   There is still a sadly-persistent view that it’s about making it ‘fun’ (e.g. tarted up drill and kill), while the real issue is attaching the features that drive games (challenge, contextualization, focus on important decisions) and lead to better learning.   Still, the awareness is growing, and that’s a good thing.

And I’ve been riffing quite a lot recently about social networking (e.g. here), as my own awareness of the potential has grown (better late than never :). The whole issue of enabling organizational learning is powerful. And I’ve also previously opined about elearning 3.0, the semantic web, so I’ll point you there rather than reiterating.

So there you have it, my optimistic predictions. I welcome your thoughts.

Thinking & Learning

19 December 2008 by Clark 4 Comments

Today I stumbled across two interesting articles.   Both talk about some relevant research on learning, and coincidentally, both are by folks I know.

An alumni bulletin mentioned research done by Hal Pashler (who was a new professor while I was a grad student; I was a teaching assistant for him, and he let me give my first lecture in his class) on the intervals necessary for successful learning. Will Thalheimer has done a great job publicizing how we need to space learning out, and this research was interesting for the length of time recommended.

The study provided obscure information (true but unusual), with an initial study session, a subsequent re-study, and then a test, with varying intervals between the study sessions, and between the second study and the test (up to a year). The article spelled out the implications for studying (no news there: cramming doesn’t work), but what interests me are the implications for organizational learning. The interesting result is the potential length of time between studying and performance.

“If you want to remember information for just a week, it is probably best if study sessions are spaced out over a day or two.   On the other hand, if you want to remember information for a year, it is best for learning to be spaced out over about a month.”

Extrapolating from the results, he added, “it seems plausible that whenever the goal is for someone to remember information over a lifetime, it is probably best for them to be re-exposed to it over a number of years.”

“The results imply,” said Pashler, “that instruction that packs a lot of learning into a short period is likely to be extremely inefficient, at least for remembering factual information.”

This latter isn’t new information, but it does fly in the face of much formal training conducted on behalf of organizations. We’ve got to stop massing our information in single-event workshops, and start preparing, reactivating, and reactivating again for anything that isn’t performed daily.
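One rough way to operationalize the quoted guidance is as a simple lookup: this is my transcription, with hypothetical retention-interval thresholds, not Pashler’s actual model.

```python
# Distillation of the quoted spacing guidance into a lookup table.
# The day-count thresholds are my hypothetical reading of "a week",
# "a year", and "a lifetime" -- not values from the study itself.
SPACING_GUIDE = [
    (7,            "space sessions over a day or two"),
    (365,          "space sessions over about a month"),
    (float("inf"), "re-expose over a number of years"),
]


def spacing_advice(retention_days: float) -> str:
    """Suggest a study-session spacing for a desired retention interval."""
    for limit, advice in SPACING_GUIDE:
        if retention_days <= limit:
            return advice
    return SPACING_GUIDE[-1][1]
```

The pattern to notice: the recommended gap grows with the retention goal, which is exactly why a single massed workshop can’t serve anything you need people to remember for months.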

Moving from learning to thinking and doing (it’s not about learning after all), the second one concerns research done by Jonathan Schooler (who was a new faculty member where I was doing my post-doc; we published some work we did together with one of his PhD students).   Schooler’s work has been looking at day-dreaming, and found that it’s not a unitary thing, but actually has a couple of different modes, which differ in whether you’re not aware you’re daydreaming or are, instead, mindful of it.   The latter is to be preferred.

In the one where you’re aware you are daydreaming, you can mentally simulate situations and plan what might happen and how to respond, or review what did happen and consider alternatives.   This works for social situations as well as other forms of interactions.   And the results are beneficial: “people who engage in more daydreaming score higher on experimental measures of creativity, which require people to make a set of unusual connections.”

This is what I mean when I talk about reflection, and in the coming times of increasing change and decreasing knowledge half-life, the ability to be creative will increasingly be a competitive advantage.   So, as I’ve said before, do try to make time for reflection.   It works!

Collective intelligence patterns

10 December 2008 by Clark 4 Comments

I had the good fortune to get to meet Tom Malone way back when he was working on what makes computer games fun (cited in my book).   I stopped by PARC (then the geek’s Mecca), and got to bask in the environment that produced the GUI on top of Doug Engelbart’s mouse.

Tom then went on to be a thought leader at the Sloan School of Management, studying office work and then higher levels of activity, leading to a recent book, “The Future of Work”. I happened to meet him again at an event at IBM’s Almaden Research Center, and he was gracious enough to remember me and discuss his work (I challenged him about his ‘guilds’, since they still can’t get the reasonable healthcare that businesses can; don’t get me started).

I mention this backstory to show the trajectory of thought leadership he’s had (while still remaining a really nice guy). He just spoke at the celebration of Doug Engelbart’s work, and while I couldn’t attend, I was looking for blog postings and found his slide deck.

You (should) know I like models, and he’s gone beyond talking about how web 2.0 social networking can facilitate work, to actually analyze and distill some underlying principles. In his presentation on The Landscape of Collective Intelligence, he comes up with four characteristics of design patterns (or genes, as he calls them): What (strategy), Who (staffing), How (structure & process), & Why (incentives/alignment).   This is a really nice systematic breakdown into patterns tied to real examples.

For Who, he distinguishes between a hierarchical arrangement and a crowd, the latter being a more random structure; he focuses on the latter. For Why, he breaks it out into Money, Love, & Glory. For What, it’s Create a solution or Decide on an issue. For How, it’s whether the activity is done independently or dependently. The latter two work out to a nice little matrix with collection, collaboration, many-to-many, and group decision.

I really liked his statement that “failure to get motivational factors right is probably the single greatest cause of failure in collective intelligence experiments”.   That’s insightful, and useful.

The implications for informal learning are obvious, I’ll have to think more about formal learning.   Still, a great foundation for thinking about using networks in productive ways.   Definitely worth a look.

Does Education Need to Change?

21 November 2008 by Clark 5 Comments

George Siemens asks in his blog:

1. Does education need to change?
2. Why or why not?
3. If it should change, what should it become? How should education (k-12, higher, or corporate) look in the future?

I can’t resist answering.   1. ABSOLUTELY!   Let me count the ways…

K12 education is broken in so many ways. We’re not engaging our students in why this is important, we’re not giving them problems to solve that resemble the ones they’ll face outside, we’re focusing on the wrong skills, we don’t value teachers, we’ve got crumbling infrastructure, we’ve beggared the budgets; the list goes on.

We need new curricula and new pedagogy at least. We should be focusing on 21st century skills (not knowledge): systems thinking, design, problem-solving, research, learning to learn, multimedia literacy, teamwork and leadership, ethics, etc; my wisdom curriculum.   We need pedagogies that engage, spiral the learning around meaningful tasks, that develop multiple skills.

We need this at K12, at higher education, and in the workplace. We need technology skills infused into the curriculum as tools, not as ends in themselves. We need teachers capable of managing these learning experiences, parents engaged in the process and outcomes, and administrations, educational and political, that ‘get’ this. We need learners who can successfully segue into taking control of their learning and destiny.

Yes, a tall order. But if we don’t, we’re basically hobbling our best chances for a better world. Look, the only way to have functioning societies is to have an educated populace, because you just can’t trust governments to do well in the absence of scrutiny. So, let’s get it started!
