Learnlets

Clark Quinn’s Learnings about Learning

Social Silliness

28 May 2019 by Clark 1 Comment

It’s that time again. Someone pointed me to a post touting the benefits of social learning. And I’m a fan! However, as I perused it, I saw that it was a bit of social silliness. So, let me be clear about why.

It starts off mostly on the right foot, saying “playing off of the theory that people learn better when they learn collectively…” I’m a proponent of that theory. There are times when that’s neither the most effective nor the most efficient approach, but there are times when it’s really valuable.

What follows in the article are a series of five tips about applying social learning. And here we go off the rails!  Let’s go through them:

  1. A Facebook Group Or A Forum, or both. Well, yes, a group is a good idea. But Facebook is not! Expecting everyone to be open to being on Facebook isn’t a good policy. While I’m on Facebook (and no, don’t connect to me there, that’s for personal relationships, not professional ones; go see me on LinkedIn ;), I know folks who aren’t and won’t be. Create your own group in your own tool, so folks know what’s being done with their data!
  2. Leaderboards. What? NOOOO!  That’s so  extrinsic  ;). Seriously, that’s the second most important tip?  Er, not. If you’re not making sure folks are finding intrinsic value in the community, go back and fix it. People (should) come because it’s worth it. Work to make it so. That’s hard, but in the end if you want to build community, start modeling and encouraging sharing, and make it safe.  Don’t do it on points.
  3. Surveys or polls. Ok, let’s put this in context. Yes, getting people to participate and collecting their opinions is good. Is this the third most important tip? No, but no points lost for this suggestion. However, let’s do it right. You can really decrease participation when you’re allowing ‘drive-by’ surveys. Have a policy, be clear, and do it  when it makes sense. This would be a subset of a more general principle about stimulating and leveraging the community, I reckon.
  4. Interactions between the L&D  Team and Employees.  This requires nuance. Not just any interactions. In a sense, L&D should be invisible, the hidden hand that keeps things moving. Facilitating, yes, where someone needs a nudge to contribute, someone else needs a nudge to  not contribute (in that way, or that often, or…), some statement needs some nuance, etc. But ultimately, the community should be interacting with each other, not L&D.
  5. elearning Courses that Require Teamwork. Back to my point above, yes, sometimes. This is a good idea. And it can build the community skills that will carry over. You want a smooth segue from courses to community. The suggestion, however, that “only that employee can access that particular phase or section” is a lot of extra design. Why not just group assignments with facilitation to participate? It’s not a horrible idea, but not a general one.

Overall, these are nowhere near the first five tips I would suggest for building community. I agree community’s big, but I’d be pushing:

Start small: get it working somewhere (particularly within L&D), then spread slowly to other groups.

Make it safe: ensure that there are principles in place about what’s acceptable behavior, and that the relevant leader is sharing. If they aren’t, will anyone really believe it’s safe?

Ensure value: make sure that people coming to the community will find reasons to return. To get it to critical mass, you need to nurture it. Start by seeding valuable information over time, and inviting (or incepting) some respected folk to contribute. And the surveys and polls are ways to find out what’s going on and reflect that back.  It takes effort to kick start it, but it’s critical to get people to stay engaged. As part of this:

Enable sharing: the ‘show your work‘ mentality should be encouraged. Getting people to show what they’re doing (once it’s safe) enables long-term benefits. This will start providing valuable content, and support the organization beginning to learn together.

Persist: success will depend on maintaining the support until the community reaches critical mass. That means a continual effort to create value and surface value until the community is doing this itself.

I’m not saying this is my official list; this is off the top of my head. However, when I look at these two lists, the problem for me is that the top list is tactical, but creating community is really a strategic initiative. Which means it needs to be treated as such. No social silliness; it needs to be seriously addressed. So, what am I missing?

New reality

22 May 2019 by Clark Leave a Comment

I’ve been looking into ‘realities’ (AR/VR/MR) for the upcoming Realities 360 conference (yes, I’ll be speaking). And I found an interesting model that’s new to me, and of course prompts some thoughts. For one, there’s a new reality that I hadn’t heard of!  So, of course, I thought I’d share.

A diagram from reality, through augmented reality and augmented virtuality, to virtual reality. The issue is how AR (augmented reality) and VR (virtual reality) relate, and what MR (mixed reality) is. The model I found (by Milgram; my diagram slightly relabels it) puts MR in the middle, between reality and virtual reality. And I like how it makes a continuum here.

So this is the first I have heard of ‘augmented virtuality’ (AV). AR is the real world with some virtual scaffolding. AV has more of the virtual world with a little real world scaffolding. A virtual cooking school in a real kitchen is an example. The virtual world guides the experience, instead of the real world.

The core idea to me is about story. If we’re doing this with a goal, what is the experience driver? What is pushing the goal? We could have a real task that we’re layering AR on top of to support success (more performance support than learning). In VR, we totally have to have a goal in the simulated world. AV strikes me as having a story created in the virtual world, one that uses virtual images and real locations. Kind of like The Void experience.

This reminded me of the Augmented Reality Games (ARGs) that were talked about quite a bit back in the day. They can be driven by media, so they’re not necessarily limited to locations. A colleague had built an engine that would allow experiences driven by communications technologies: text messages, email, phone calls, and these days we could add in tweets and posts on social media and apps. These, in principle, are great platforms for learning experiences, as they’re driven by the tools you’d actually use to perform. (When I asked my colleagues why they think ARGs ‘disappeared’, the reason was largely cost; that’s avoidable, I believe.)

I like this continuum, as it puts ARGs and VR and AR in a conceptually clear framework. And, as I argue for extensively, good models give us principled bases for decisions and design. Here we’ve got a way to think about the relationship between story and technology that will let us figure out what makes the best approach for our goals. This new reality (and the others) will be part of my presentation next month. We’ll see how it manifests by then ;).

Packaging change

21 May 2019 by Clark Leave a Comment

I’ve been looking at a couple of things, with the goal of finding the sweet spot at their intersection. I’m looking at my missions, interests, and what’s resonating. And I find that they’re converging into a few things. Which I thought I’d make concrete, because I really want to see if these are things that are tangible and valuable. What is the right packaging? I’m asking for your help: is this the right suite, and if not, what do you want?

To start, one of my themes for the year is transformation, about deeper learning design. I’ve argued strongly that we need to do deeper learning design before we worry about tarting it up with personalization/adaptation, VR/AR, AI, etc. It’s time to get serious about actually having an organizational impact! And we have converging evidence about what that takes.

As triangulation, what’s appearing as interests are the things people are asking for, or are tracking. And I’ve recently been asked (and been happy to oblige) to talk about learning science. The eLearning Guild just had a summit, and my learning experience design workshop from Learning Solutions has again been accepted for DevLearn (and I’d welcome seeing you there!).

We also know what’s largely lacking, and how to help. Through experience, I’ve found there are several ways to make progress. For one, you need the foundational knowledge, and it really needs to be shared and agreed upon in the organization. For another, you can benefit from a clear understanding of your current state. You can’t move forward if you don’t know where you are! Then, you need a clear plan that gets you from where you are to where you can be, one that’s right for you. No ‘best practices’, but a principled approach, looking at the bigger picture. Finally, support in moving forward can be valuable. There are ways you can fall back, or barriers that can hinder you, which you need fresh thinking to address.

So the offer involves any combination of the following things:

Workshop: we actively explore the necessary knowledge to bring it to life, and then practice applying it. This brings a shared vocabulary and understanding of what needs to change and why.

Assessment: an independent assessment of where you are in your processes, and what are the opportunities for change. The goal is to identify the minimal interventions that can have the biggest impact.

Strategy Session: here the goal is to determine the path to change. What are the opportunities and barriers, and what is the sequence of moves that creates the change? It’s about understanding context and opportunity, bringing in the best principles, and using them as a guide to move forward.

Coaching: here we provide the lightest-weight support that will keep momentum. In my experience, it’s been easy for folks to fall back into prior thinking without an ongoing stimulus, and the ability to comment early in a plan on interim moves helps keep a strategy on track.

These can manifest in several ways:

  • a learning science workshop for the team and an evaluation of your design process for the small changes with the big impact
  • the assessment, a strategy session for improvements, and a termed coaching engagement to support success

Your situation would make a particular combination more sensible. They’re better together, but any one is a catalyst for improvement. And these are all things I’ve done with organizations, with success. Each alone is done for well under $10K (parameters vary), but the goal is to make these very accessible. And, of course, there are substantial discounts for taking on more than one (to make the change more likely to stick).

I note that my other theme for the year is ‘intellectricity‘, unpacking the power of your people in informal learning. While I’m helping organizations around this as well, I haven’t yet formalized it like this. Yet it’s clear each could be done in the above formats as well, and I’m happy to make the same offer. And there seems to be growing interest in this area as well.

The reason I’m putting this out there, however, is because I want feedback and/or uptake. It’s not enough to just encourage, I want to actually support meaningful change! I have strong grounds to believe these are important and necessary changes, and I want to help make it happen, the more the faster the better. And if this isn’t the packaging you expect, let me know. I’m happy to discuss and adapt. What I want to do is have an impact, so help me figure out how.

Learning Lessons

16 May 2019 by Clark 1 Comment

So, I just finished teaching a mobile learning course online for a university. My goal was not to ‘teach’ mobile so much as to develop a mobile mindset. You have to think differently than the phrase ‘mobile learning’ might lead you to. And, not surprisingly, some things went well, and some things didn’t. I thought I’d share the learning lessons, both for my own reflection, and for others.

As a fan of Nilson’s Specifications Grading, I created a plan for how the assessment would go. I want lots of practice, less content. And I do believe in checking knowledge up front, then having social learning, and a work product. Thus, each week had a repeated structure of each element. It was competency-based, so you either did it or not. No aggregation of points, but instead: you get this grade if you get this many assignments correct, and write a substantive comment in a discussion board and comment on someone else’s this many times, and complete this level on this many knowledge checks. And I staggered the deadlines through the week, so there’d be reactivation. I’ve recommended this scheme on principle, think it worked out well in practice, and I’d do it again.
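For the technically inclined, here’s a minimal sketch of how such a specifications-grading rule can be encoded (in Python, with made-up thresholds; the counts and grade bands below are hypothetical, not the ones from my course). Each grade sets a minimum on every element, and you earn the highest grade whose minimums you meet in full:

```python
# Hypothetical specifications-grading sketch: thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class WeekRecord:
    assignments_correct: int
    substantive_comments: int
    peer_replies: int
    knowledge_checks_passed: int

# Each grade is earned by meeting *all* of its minimums -- no point totals.
GRADE_SPECS = {
    "A": WeekRecord(4, 4, 8, 8),
    "B": WeekRecord(3, 3, 6, 6),
    "C": WeekRecord(2, 2, 4, 4),
}

def grade(record: WeekRecord) -> str:
    # Dicts keep insertion order, so this checks highest grade first.
    for letter, spec in GRADE_SPECS.items():
        if (record.assignments_correct >= spec.assignments_correct
                and record.substantive_comments >= spec.substantive_comments
                and record.peer_replies >= spec.peer_replies
                and record.knowledge_checks_passed >= spec.knowledge_checks_passed):
            return letter
    return "F"
```

No points are summed; falling short on any single element drops you to the next band, which is exactly the point of the scheme.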

In many ways it ‘teacher-proofs’ the class. For one, the students are giving each other feedback on the discussion question. The discussion questions and assignments were both designed to elicit the necessary thinking, which makes marking the assignments relatively easy. And the knowledge checks set a baseline of background. Designing them all as scenario challenges was critical as well.

And I was really glad I mixed things up. In early weeks, I had them look at apps and evaluate ones that they liked. For the social week, I had them collaborate in pairs. In the contextual week, they submitted a video of themselves. They had to submit an information architecture for the design week. And for the development week, they tested it. Thus, each assignment was tied to mobile.

It was undermined by a couple of things. First, the LMS interfered. I wrote careful feedback for each wrong answer for each question on the knowledge checks. And, it turns out, the students weren’t seeing it!  (And they didn’t let me know ’til the 2nd half of the abbreviated semester!) There’s a flag I wasn’t setting, but it wasn’t the default!  (Which was a point I then emphasized in the design week: start with good defaults!)

And I missed making the discussions ‘gradeable’ until late, because of another flag. That’s at least partly on me. Which meant, again, they weren’t getting feedback, and that’s not good. And, of course, it wasn’t obvious ’til I remedied it. Also, my grading scheme doesn’t fit the default grading schema of the LMS, so it wasn’t automatically doable anyway. Next time, I would investigate that and see if I could make it more obvious. And learn about the LMS earlier. (Ok, so I had some LMS anxiety and put it off…)

With 8 weeks, I broke it up like this:

  1. Overview: mobile is  not courses on a phone. The Four C’s.
  2. Formal Learning:  augmenting learning.
  3. Performance Support: mobile’s natural niche
  4. Social: connecting to people ‘on the go’
  5. Contextual: the unique mobile opportunity
  6. Design: if you get the design right…
  7. Development: practicalities and testing.
  8. Strategy: platform and policy.

And I think this was the right structure. It naturally reactivated prior concepts, and developed the thinking before elaborating.

For the content, I had a small set of readings. Because of a late start, I only found out I couldn’t use my own mLearning book when the bookstore told me it was out of print (!). That required scrambling and getting approval to use some other writings I’d done. And the late start precluded me from organizing other readings. No worries; minimal was good. And I wrote a script that covered the material, and filmed myself giving a lecture for each week. Then I also provided the transcript.

The university itself was pretty good. They capped the attendance at 20. This worked really well. (Anything else would’ve been a deal breaker after a disaster many years ago when an institution promised to keep it under 32 and then gave me 64 students.)  And there was good support, at least during the week, and some support was available even over the weekend.

Overall, despite some hiccups and some stress, I think it worked out (particularly under the constraints). Of course, I’ll have to see what the students say. One other thing I’d do that I didn’t do a good job of generally (I did with a few students) was  explain  the pedagogy. I’ve learned this in the past, and I should’ve done so, but in the rush to wrestle with the systems, it slipped through the cracks.

Those are my learning lessons. I welcome  your feedback and lessons!

Shaming, safety, & misconceptions

14 May 2019 by Clark 1 Comment

Another twitter debate, another blog post. As an outgrowth of a #lrnchat debate, a discussion arose around whether making errors in learning could be a source of shaming. This wasn’t about the learners, however, being afraid of being shamed. Instead it was about whether the designers would feel proscribed from making real errors because of their expectations about learners’ emotions. And I have strong beliefs about why this is an important issue. Learners should be making errors, for important reasons. So, we need to make it safe!

The importance of errors is in the fact that we’d rather make them in practice than when it counts. Some have argued that we literally  have to fail to be ready to learn. (Perhaps almost certainly if the learners are overconfident.) The importance to me is in misconceptions. Our errors don’t tend to be random (there is some randomness), but instead are patterned. They come from systematic ways of perceiving the situation that are wrong. They come from bringing in the wrong models in ways that seem to make sense. And it’s best to address them by being able to make that choice, and getting feedback about why that’s wrong.

Which means learners will have to fail. And they should be able to make mistakes. (Guided) exploration is good. Learners should be able to try things out, see what the consequences are, and then try other approaches. It shouldn’t be a free-for-all, since learners may not explore systematically. Instead, as I’ve said, learning should be designed action and guided reflection. And that means we should be designing in these alternatives to the right action as options, and providing specific feedback.

So, if they’re failing, is that shaming? Not if we do it right. It’s about making failing  okay.  It’s about making the learning experience ‘safe‘. Our feedback should be about the decision, and why it’s wrong (referring to the model). We might not give them the right answer, if we want them to try again. But we don’t make it personal, just like good coaching. It’s about what they did, not who they are. So our design should prevent shaming, but by making it safe to fail, not preventing failure.

The one issue that emerged was the fear that the designers (or other stakeholders) might feel this could be emotionally damaging, perhaps from fears of their own. Er, nope! It’s about the learning, and we know what research tells us works. We have to be responsible enough to be willing to do what’s right, as challenging as that may be for any reason. Time, money, emotions, what have you. Because, if we want to be responsible stewards of the resources entrusted to us, we should be doing what’s known to be right. Not chasing shiny objects. (At least, until we get the core right. ;)

So, let’s not shame ourselves by letting irrelevant details cloud our judgment. Do the right thing. For the right reasons. We know how to be serious about our learning. Make it so.

Facilitate is the new train

9 May 2019 by Clark Leave a Comment

Ok, so I’m being provocative with the title, since I’m not advocating the overthrow of training. The main idea is that a new area for L&D is facilitation. However, this concept also updates training. It’s part of what I was arguing when I suggested that the new term for L&D should be P&D, Performance & Development. So let’s start with that. We need to facilitate in several directions!

The driver behind the suggested nomenclature change is that the focus of L&D needs a shift. The revolutionary point of view is that organizations need both optimal execution and continual innovation (read: learning). In this increasingly chaotic time, the former is only the cost of entry, but it can’t be ignored. The latter is also becoming more and more critical!

A performance focus is the key to execution. You want to ensure people are doing what’s known about what needs to be done. That’s the role of instruction and performance support. Performance consulting is the way to work backwards from the problem and determine the best interventions for that optimization.

However, learning science is pushing us to recognize that we can do better. Information dump and knowledge test isn’t going to lead to any change in behavior. If you want people to be able to  do, you have to have them  do in practice. Which means the focus is on the practice and the feedback. That latter is facilitation. The clichéd switch from sage on the stage to guide on the side does capture it. So even here we see the need for facilitation.

It’s in the latter, however, where facilitation really comes to the fore. When we talk about development, we’re going beyond developing the individual. We are addressing the organization’s learning. And, as I’ve said, innovation  is learning, just a different sort. What’s needed is  informal learning.

And informal learning, while natural, isn’t always optimal. Habits, misconceptions, culture, and more can intrude. This is why facilitation may be even more key to success for organizations.

And, again, L&D should be the most knowledgeable about learning, because learning underpins both performance and development. Thus, if L&D is going to adapt, learning how to facilitate learning will be core. Facilitate really will be the new ‘train’.

Hub or spoke?

7 May 2019 by Clark 2 Comments

How are learning design teams distributed (or not) in an organization? I’ve seen both totally separate teams in organizations (spoke), and totally central ones (hub), and of course gradations in between. While the size of the organization is one driver, there are tradeoffs in efficiency and effectiveness. And, I think tech can help. How?

So, to start, this has been an ongoing debate. I cynically (who, me? :) suspect that when a new manager comes in, whatever it is that’s been done, they have to do the opposite. Something must be done, right away!  More seriously, there are strengths to either.

Distributed teams are closer to their partners. They have greater internal knowledge, and can be more responsive. Central teams make it easier to maintain quality. You don’t get driven as much by differing team cultures and can maintain a bastion of quality. Similarly, you can often find efficiencies from scale and lack of redundancy. And sometimes, you can have distributed teams taking advantage of some shared resources such as video production.

However, I was pondering how we can use technology to help break through the tradeoffs. As we build a community  around the design of learning, the teams can be distributed as long as they’re continuing to learn together.  If the community is continuing to learn together, showing their work and lessons learned, and regularly connecting whether through lunch-and-learns, offsites, or what have you, the shared learnings don’t need to come from physical proximity.

Building culture is hard, but as I’ve argued elsewhere, L&D really should take ownership of the new ways of working  first, before proselytizing it elsewhere. Thus, L&D should be practicing the principles of a learning culture. Then, it really doesn’t matter if you’re hub  or spoke, or anything in-between, because you  are a community.

Competencies for L&D Processes?

1 May 2019 by Clark Leave a Comment

We have competencies for people. Whether it’s ATD, LPI, IBSTPI, IPL, ISPI, or any other acronym, they’ve got definitions for what people should be able to do. And it made me wonder, should there be competencies for processes as well? That is, should your survey validation process, or your design process, also meet some minimum standards?  How about design thinking? There are things you  do  get certified in, including such piffle as MBTI and NLP.  So does it make sense to have processes meet minimum standards?

One of the things I do is help orgs fine-tune their design processes. When I talk about deeper elearning, or we take a stand for serious elearning, there are nuances that make a difference. In these cases, I’m looking for the small things that will have the biggest impact. It’s not  about trying to get folks to totally revamp their processes (which is a path to failure).  Yet, could we go further?

I was wondering whether we should certify processes. Certainly, that happens in other industries. There are safety processes in maintenance, and cleanliness in food operations, and so on. Could and should we have them for learning? For performance consulting, instructional design, performance support design, etc?

Could we state what a process should have as a minimum requirement? Certain elements, at least, at certain way points? You could take Michael Allen’s SAM and use it as a model, for instance. Or Cathy Moore’s Action Mapping. Maybe Julie Dirksen’s Design For How People Learn could be created as such. The point being that we could stipulate some way points in design that would be the minimum to be counted as sufficient for learning to occur. Based upon learning science, of course. You know, deliberate and spaced practice, etc.

Then the question is, should we? Also, could we agree? Or, of course, people could market alternative process certifications. It appears this is what Quality Matters does, for instance, at least for K12 and higher ed. It appears IACET does this for continuing education certification. Would an organizational certification matter? For customers, if you do customer training? For your courses, if you provide them as a product or service? Would anyone care that you meet a quality standard?

And it could go further. Performance support design, extended learning experience design (c.f. coaching), etc.  Is this something that’s better at the person level than the process level?

Should there be certification for compliance with a competency about the quality of the learning design process? Obviously in some areas. The question is, does it matter for regular L&D? On one hand, it might help mitigate the info dump/knowledge test courses that are the bane of our industry. On the other hand, it might be hard to find a workable definition that could suit the breadth of ways in which people meet learning needs.

All I know is that we have standards about a lot of things. Learning data interchange. Individual competencies. Processes in education. Can and should there be for L&D processes? I don’t know. Seriously. I’m just pondering. I welcome your thoughts.

Surprise and safety

30 April 2019 by Clark Leave a Comment

As I reflect further on the improved surprise model, I realize there’s one thing I missed. The model gives a motivation for learning, and an implication for design. But there’s one thing more in the model, and one more implication for design. And this has to do with safety.

So, first, the initial model says that we learn to  minimize surprise. We’re driven to remove the mismatch between what we expect and what occurs. This  could lead to a desire to do nothing, or as little as possible, but a further elaboration says we also want to maximize outcomes. Thus, we won’t just sit around, but explore.

That means helping learners know two things: that they want to know this (it’s to optimize what they care about), and that they don’t know it (the gap they have to minimize). If we do that, they’re ready to learn. But there’s one more thing.

We won’t explore other alternatives to see if they’re a better solution if the consequences are high. We’ll only explore if the cost of exploration isn’t higher than the benefit we gain if it’s better. So that one other thing is safety. If it isn’t safe, we’ll stick with the known solution.

Which means that we need to make it safe to explore in our learning.  And, that includes both formal learning and  informal.  Mistakes in learning must be expected and accepted. In formal learning, mistakes are learning opportunities. Have alternatives that represent reliable ways folks go wrong, and it’s ok if they choose those because you have feedback specifically for that selection. And informally, mistakes (not the same ones, or obvious ones, there’s accountability too) are fine when the lesson’s learned.

Understanding how, and why,  we learn is critical to optimizing learning. And I think that’s a valuable goal. It’s too important to leave to chance, or old habits. It’s time to be alert to what we know, and put it into practice.

What’s the next buzzword?

24 April 2019 by Clark 2 Comments

I was perusing an old list of potential column topics, and came across one that asked about MOOCs. Now, you probably recognize that the term has pretty much evaporated from any list of top L&D concerns. That’s kind of funny, to me (ok, so you may question my sense of humor). And it makes me wonder what topics are current and what’s on the horizon. What is the coming buzzword?

I talked in a column about the problem with chasing shiny objects. In short, it’s easy to get swayed by the latest hot topic, and want to be seen to be on top of things. But, as I’ve said repeatedly, a gilded dud is still a dud. If we get the core right first,  then we can move on to see what’s real. And, of course, we need to dig into the real affordances, not just the hype (PowerPoint in Second Life, anyone?).

So there are some buzzwords already on the wane. Such as MOOCs. A good sign: if someone’s trademarked it, it’s jumped the shark. Frankly, that already characterizes microlearning. And we’ve had someone recently claim to have invented workflow learning (though it’s been talked about for years). When they’re fighting about ownership, it’s done.

What’s waxing as opposed to waning?  How about ‘bots?  That’s the topic du jour! Often, as part of AI; as is Machine Learning, Deep Learning, and so on. Also Analytics (I think Big Data is already in the last paragraph’s category). Not necessarily bad, but part of this phenomenon is a lack of clarity about what we mean when we use any of these terms.  So, maybe it is like AI: if you know what you’re talking about, it’s no longer new and shiny! And of course, AR and VR are very much  now. And personalized and adaptive! (Time for some ownership moves!)

So here’s the question: what’s the next buzzword? Would that it were learning science! Ok, there’s been a bit of a resurgence (time to plug the coming Science of Learning Summit, with the usual caveat), but not near enough. C’mon, folks, let’s get together and work on taking your design approaches and tuning them up! Of course, I could wish we were talking IA instead of AI, too. What else? Contextual. Content Systems. That’s my thinking (and I’ve been talking about these things for years; maybe it’s time).

So, what’s on your list? What’s next? What’s ready for primetime? Wearables? Post-AI? (I just made that up.)  I look forward to hearing your thoughts!

 

  • September 2015
  • August 2015
  • July 2015
  • June 2015
  • May 2015
  • April 2015
  • March 2015
  • February 2015
  • January 2015
  • December 2014
  • November 2014
  • October 2014
  • September 2014
  • August 2014
  • July 2014
  • June 2014
  • May 2014
  • April 2014
  • March 2014
  • February 2014
  • January 2014
  • December 2013
  • November 2013
  • October 2013
  • September 2013
  • August 2013
  • July 2013
  • June 2013
  • May 2013
  • April 2013
  • March 2013
  • February 2013
  • January 2013
  • December 2012
  • November 2012
  • October 2012
  • September 2012
  • August 2012
  • July 2012
  • June 2012
  • May 2012
  • April 2012
  • March 2012
  • February 2012
  • January 2012
  • December 2011
  • November 2011
  • October 2011
  • September 2011
  • August 2011
  • July 2011
  • June 2011
  • May 2011
  • April 2011
  • March 2011
  • February 2011
  • January 2011
  • December 2010
  • November 2010
  • October 2010
  • September 2010
  • August 2010
  • July 2010
  • June 2010
  • May 2010
  • April 2010
  • March 2010
  • February 2010
  • January 2010
  • December 2009
  • November 2009
  • October 2009
  • September 2009
  • August 2009
  • July 2009
  • June 2009
  • May 2009
  • April 2009
  • March 2009
  • February 2009
  • January 2009
  • December 2008
  • November 2008
  • October 2008
  • September 2008
  • August 2008
  • July 2008
  • June 2008
  • May 2008
  • April 2008
  • March 2008
  • February 2008
  • January 2008
  • December 2007
  • November 2007
  • October 2007
  • September 2007
  • August 2007
  • July 2007
  • June 2007
  • May 2007
  • April 2007
  • March 2007
  • February 2007
  • January 2007
  • December 2006
  • November 2006
  • October 2006
  • September 2006
  • August 2006
  • July 2006
  • June 2006
  • May 2006
  • April 2006
  • March 2006
  • February 2006
  • January 2006

Amazon Affiliate

Required to announce that, as an Amazon Associate, I earn from qualifying purchases. Mostly book links. Full disclosure.

We use cookies to ensure that we give you the best experience on our website. If you continue to use this site we will assume that you are happy with it.Ok